The New iPad: Initial Thoughts and Data Plan Information

It has finally been announced!  The “New iPad”.

I’m not sure I’m a fan of the naming convention, but the device specifications seem to have been well received. I don’t believe Apple went beyond what was expected, but they did deliver a solid set of updates that I’m sure will continue to do well against the competition. Apple must plan the features of their new devices carefully, since they only update their product line in each area once per year. I won’t bother listing all of the new features again here; you can find a list of the updates almost anywhere else. The one update I did expect to see that wasn’t present is a larger storage capacity. I suspect the sweet spot for previous versions of the iPad was at 32 GB. I am a self-proclaimed packrat, and with the device now able to display true HD-quality graphics, I expected to see a 128 GB version. I would have gladly paid $100 or more for the privilege of carrying an extra 64 GB of stuff.

Although I am pleased with the new features the device will have, I do have a small concern over the screen resolution. The new device boasts a resolution of 2048 x 1536, not much smaller than that of the standard 27″ iMac (2560 x 1440). My concern isn’t with the display of graphics or pictures; those should be phenomenal. As an aging gadgeteer, my concern is over the size of the text that will be available in applications like Mail. In an earlier Kindle Fire review, I found a high-resolution screen to be a negative feature because the mail app on that device didn’t offer the ability to scale text or alter the font in any usable way. I’m hoping Apple has taken this into consideration in the updates to the native apps on the device.

One of the other features that was discussed in the Apple keynote presentation was that the new iPad would be hotspot-capable. Having a device be “hotspot-capable” means that it can act as your internet connection for other Wi-Fi devices. In this case, the iPad would provide a very fast connection for up to five additional devices at a time over the 4G LTE network. Although the operating system software on the device can provide this feature, it looks as though this feature will only be available through Verizon at product launch on March 16, 2012. AT&T currently doesn’t plan to offer this feature, at least not right away. Details of the announcements by both companies can be found here. It should also be noted that Apple’s announcement touted the theoretical maximum speed for this technology.  Although it should be generally faster than existing 3G service, customers in the real world should not expect anywhere close to the 73 Mbps that was discussed in the presentation.

One thing that I have not seen much discussion on so far is the cost of the actual data plans on the 4G network access.  After a bit of digging, I found these plan descriptions.

"New iPad" - Verizon Data Plans

"New iPad" - AT&T Data Plans

As you can see, the lower tier plans on AT&T are cheaper. Although $14.99 is enticing for a data package, please realize that it is such a small amount of data that you will likely be purchasing additional data anyway.  If you will be using data at all, don’t even bother with the 250 MB plan from AT&T. I was unable to find out what the cost of additional data would be if you exceed your plan level. Traditionally, additional data is sold at $10 per GB.

For those of you interested in using the hotspot feature, it should be noted that Verizon will include this capability in the cost of the data. It is likely that AT&T will charge an additional fee for this feature, if and when they actually roll it out.

Although the iPhone is now available on Sprint, there was no mention of the iPad being available on Sprint’s high speed network.  Has Sprint fallen from Apple’s graces?

For those of you upgrading from an iPad 2: if you own the Apple folding case, don’t give it away with your old device. It is supposed to still fit the new device. Also, as you might expect, the standard 30-pin dock connector is still in use, so there is no need to change out any charging or connecting accessories. Although the old chargers should work, the new charger that ships with the device may be able to charge its larger battery faster. This is purely speculation on my part.

If you need me, I’ll be out on the porch waiting for the FedEx delivery…


The Do Not Call List: Has It Stopped Working?

It has happened to you before. You are enjoying time with your family, eating dinner or participating in a business meeting when your phone rings. It is a number that you don’t recognize. It isn’t a local number. It could be from Phoenix, Chicago, Dallas, Cincinnati, or anywhere else for that matter. You think to yourself, “Uncle Joe lives there. Maybe he is trying to get in touch with me about something important. I wonder if he is OK?”  You answer the phone only to hear 3 seconds of silence and an automated voice from “Card Services” that tells you that you can reduce the interest rate you pay on your credit cards. You are angry that these people have stolen away your time and attention. You may be even angrier when you realize that you paid for their distraction with your limited pool of cell phone minutes. Rightly so.

In recent weeks, the number of these “robocalls” has increased for me. I receive them not only on the home phone, but most often on my cell phone. I probably get 3-4 a week right now. All of the phone numbers associated with me are on the National Do Not Call Registry. Most have been on the list since the day it was available. For those not aware of this list, it is a list maintained by the FTC (Federal Trade Commission) and is enforced by the  FCC (Federal Communications Commission). It allows people to “opt out” of receiving unsolicited telemarketing calls. Apparently, I’m not the only one having this issue, as you can read here.

National Do Not Call Registry Tidbits

  • Adding your number to the registry is FREE.
  • You should only register numbers that you are personally responsible for. Let family and friends register their own numbers.
  • You may add personal home phone and cellphone numbers to the registry. Business lines and fax lines are not covered by this protection.
  • Once you add a number to the registry, it can take up to 31 days for the registration to take effect, so telemarketers may still legally call you during that window.
  • Businesses that could show that they have an “established business relationship” with you are exempt from this law. Any interaction with the company or any of its subsidiaries can be considered grounds for establishing this relationship. This loophole has recently been closed by requiring your written permission to receive these types of calls.
  • Your registration never expires, so your number will not need to be re-added. It will only be removed from the registry if you request it, or if the phone number is disconnected or reassigned.
  • Through the website, you can verify that your number does appear on the registry.
  • You will never receive a legitimate call from someone offering to add your name to the registry. These calls are a scam. Do not share your personal information with them.

More questions and answers about the National Do Not Call Registry can be found here. On February 15, 2012, the FCC made changes to this law.  Details of the changes can be found here.

Even though you may still get some of these unwanted calls, it is still a really good idea to register. To register phone numbers, you will fill out a form like the one shown below, which can be found online here. You may also call 888-382-1222 to have your phone number added.

National Do Not Call Registry: Register A Phone Number

If you do continue to receive these unwanted calls, I urge you to file a complaint for each occurrence. If consumers don’t continue to make some noise and let the FCC know there are problems, they will assume that there aren’t any problems. You can file a complaint directly from the National Do Not Call Registry link above.  The following screen captures show the information that will be requested when you file a complaint.

National Do Not Call Registry: Complaint Screen 1

National Do Not Call Registry: Complaint Screen 2


Secure Shell: Part 1 - The Basics

In our technological infancy, few people were concerned about security. The telnet application was the common method of establishing text-based connections to remote systems. The downside of telnet is that all of the traffic (usernames, passwords and data) is transmitted in what is known as plaintext or cleartext. That is, it is completely visible to anyone who can watch the traffic on your connection. As we became more aware of the security risks, and as “bad guys” became more aggressive in trying to obtain and use personal information, we developed a need for a more secure method of connecting to remote destinations: Secure Shell.

For those of us who work in an environment with many different flavors of Linux, UNIX, Windows and Mac computers, being able to securely access and interact with different machines, transfer data and run command-line applications is essential to our productivity.  One very valuable tool which provides much of this functionality is Secure Shell (ssh). Secure Shell is a standards-based, secure network protocol which can support remote command execution, data transfer or tunneling over otherwise insecure networks. The ssh system follows a basic client-server model where an ssh server is running on the machine you wish to connect to. An ssh client is used to validate your identity and establish a secure connection to the server on the remote system using public key cryptography mechanisms. All modern operating systems support ssh client software and virtually all non-Microsoft operating systems come with an ssh package already installed. Within Windows, you must install third-party software to provide ssh client and server support. The default port used for ssh is 22. Configuration of ssh and associated port forwarding will be considered beyond the scope of this post.

Let’s assume that you have access to a remote machine that has an ssh server installed and properly configured.  The first time you attempt to connect to the server, you will see something like this:

$ ssh sample@remote-host
The authenticity of host 'remote-host (192.0.2.10)' can't be established.
RSA key fingerprint is d3:7f:16:f8:5c:55:9b:63:c4:c7:4d:ad:df:ff:1f:ea.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'remote-host,192.0.2.10' (RSA) to the list of known hosts.
sample@remote-host's password:
Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-15-generic x86_64)

 * Documentation:

*** /host/ubuntu/disks/home.disk will be checked for errors at next reboot ***

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.


This very simple connection allows you to now work interactively with the remote system. For all practical purposes, it is as if you were sitting at the console of the other machine. The very first time that you connect to an ssh server, the authenticity of the remote machine cannot be verified. The RSA key fingerprint exchange that is done during the first login allows you to be certain that in the future, you are talking to the same host that you initially made contact with.  If not, you will receive a warning that the identity was not correct and you won’t be able to proceed. When working from a UNIX-based system, a hidden directory called .ssh will be created in your home directory when you use ssh for the first time. It stores information about known hosts that you have successfully connected to in the past (known_hosts) as well as public and private RSA keys that help verify your identity.
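If you are curious, you can inspect that hidden directory yourself. The file names in the comments below are common defaults and may differ depending on how your keys were generated:

```shell
# List the contents of the hidden .ssh directory, if it exists yet.
ls -a ~/.ssh 2>/dev/null || echo "no .ssh directory yet"

# Typical entries you may see:
#   known_hosts  - fingerprints of hosts you have accepted in the past
#   id_rsa       - your private RSA key (never share this file)
#   id_rsa.pub   - the matching public RSA key
```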

In addition to bringing up an interactive shell, you may issue specific commands that will be executed on the remote system, like this:

$ ssh sample@remote-host date
sample@remote-host's password:
Mon Feb 20 12:59:29 PST 2012

$ ssh sample@remote-host df -h /
sample@remote-host's password:
Filesystem            Size  Used Avail Use% Mounted on
/dev/loop0             29G  3.7G   24G  14% /

Granted, the examples above are not the most useful commands, but they do show how to pass commands directly to the remote system for execution. Once the command is executed, the connection is broken and you again receive a command prompt on your local system.

Now, suppose that your mission was to report on how much disk space was available on a large number of systems. You could easily write a script that pulls the available disk space from the output of the command executed remotely. The example below runs the ssh command in a subshell by putting it in parentheses, making it explicit that only the df command runs on the remote system while the rest of the pipeline processes its output locally. (Strictly speaking, your local shell splits an unquoted pipeline at each |, so grep and awk would run locally even without the parentheses; to execute the entire pipeline remotely, you would quote it as a single argument to ssh.) The command below shows a UNIX pipeline which, in my opinion, is one of the most powerful concepts in computer science. In short, the output of the first command is passed as input into the second command, the output of the second command is passed to the third, and so on. Each additional piece of the pipeline can further process or alter the results as they pass by. This alleviates the need to store the output of each operation in a temporary file. This example pulls the fourth field from all lines that contain the string “loop”.

$ (ssh sample@remote-host df -h /) | grep loop | awk '{print $4}'
sample@remote-host's password:
24G

As you might imagine, entering your password every time you want to connect to a remote system can become quite a chore. Many experienced users establish an environment which will allow them to connect to remote machines without entering a password for each connection attempt.  This is known as passwordless-ssh. Essentially, you share a copy of your RSA public key with the host that you would like to connect with. This allows the remote system to verify your identity by matching the shared key with your account information.  Some benefits of passwordless-ssh include the ability to automate interaction with the remote machines without the need to hard-code passwords within the scripts.  On the negative side, if any one of the machines becomes compromised, the perpetrator may impersonate you as they connect to any other machine in your network where your key has been shared. This may be considered a security risk by some organizations. Several different methods exist for creating and sharing these RSA keys. Specific implementation details for each type of system are beyond the scope of this particular post.
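As a rough sketch, establishing passwordless-ssh on most UNIX-based systems looks like the following. The hostname remote-host is a placeholder for your own machine, and some systems lack the ssh-copy-id helper, in which case you would append your public key to ~/.ssh/authorized_keys on the remote side by hand:

```shell
# 1. Generate an RSA key pair (accept the default file location; an empty
#    passphrase gives truly passwordless logins, at some security cost).
ssh-keygen -t rsa

# 2. Share your PUBLIC key with the remote machine. ssh-copy-id appends
#    ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on the remote system.
ssh-copy-id sample@remote-host

# 3. Future connections are verified with the key instead of a password.
ssh sample@remote-host date
```

Note that the private key (id_rsa) never leaves your machine; only the public half is shared, which is what makes distributing it to many hosts reasonably safe.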

One of the things that makes ssh so useful is its simplicity.  Any command that you can run from the command line on the remote system can be executed over ssh.

So far, we have covered remote access, remote command execution and passwordless-ssh. A future post will cover tunneling over ssh, data transfer using the ssh protocol and some tips and tricks to allow you to make the most out of using ssh.


WPS Security Vulnerability: Ease of Use -> Less Secure

Several weeks ago, a security researcher named Stefan Viehböck identified a serious vulnerability in the WPS (Wi-Fi Protected Setup) protocol supported by most consumer-grade wireless routers produced over the last several years. Although I don’t believe this feature is used very often, the fact that it is supported and turned on by default in most access points increases the importance of this discovery. A very good and detailed explanation of this vulnerability was done by Steve Gibson on episode 337 of the Security Now! podcast (transcript) on Leo Laporte’s TWiT (This Week in Tech) network. In a nutshell, having this feature enabled on your access point may allow a brute force attack to be carried out which could give a bad guy access to your network. A brute force attack is nothing more than trying many combinations of passwords or PINs until, over time, the right string is guessed. As described by Steve Gibson, the flaw here is that the router reports whether the first half of the PIN is correct before the full PIN has been validated. This significantly reduces the number of guesses required to gain access.
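A little back-of-the-envelope arithmetic (mine, not from the podcast) shows just how damaging that early feedback is. A WPS PIN has eight digits, but the eighth is a checksum computed from the first seven:

```shell
# Without the flaw: 7 free digits (the 8th is a checksum), so at most
# 10^7 PINs to try.
naive=$((10 ** 7))

# With the flaw: the router confirms the first 4 digits on their own
# (10^4 tries), then only 3 free digits remain in the second half
# (10^3 tries, since its last digit is the checksum).
flawed=$((10 ** 4 + 10 ** 3))

echo "full search:    $naive guesses"
echo "with the flaw:  $flawed guesses"
```

That collapse from ten million guesses to at most 11,000 is why the attack can succeed in a matter of hours rather than years.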

In order to be certified by the Wi-Fi Alliance (the governing body for Wi-Fi certification of devices), this feature must be supported and turned on by default. As identified in this publication from the US-CERT (United States Computer Emergency Readiness Team), most manufacturers are impacted by this vulnerability. Conspicuously missing here is Apple. Their implementation of the WPS protocol generates random PINs upon request, so their products are not impacted. Adding even more security, a WPS connection can only be initiated from within the AirPort Utility. Additional information about this vulnerability can be found here.

What can we do about this? Really, there are two options:

  1. Disable the WPS functionality – Most modern access points give you the opportunity to disable this feature from the web interface. I would suggest turning it off and just leaving it off. Really, you don’t need it.
  2. Upgrade the firmware – Many of the manufacturers of wireless access points have already released firmware updates which should fix this issue. Those who have not yet released updates will do so shortly.

Unfortunately, neither of these options passes the sniff test for implementation.  That is, would the average consumer be able to easily accomplish either of these options on their own? Would they even know where to start? Do they even know the admin password on their wireless router or the URL to visit to access it? Was that little scrap of paper with the password written on it thrown out long ago? If the average consumer doesn’t know how to fix it, they won’t. The repercussions of this vulnerability will be felt for years because of un-patched access points. The flip side of this is: do consumers even know there was a problem? I don’t recall seeing any coverage of this vulnerability in the mainstream media. Meanwhile, a handful of tools that exploit this vulnerability have already been written and made freely available on the internet.

So, how did we get here? The major reason is the desire of the Wi-Fi Alliance to simplify how consumers use products with Wi-Fi connectivity. They are walking a tightrope between ease of use and security. This time, they fell off. As consumers, we need to realize that by simplifying things, we reduce how secure they are. I would never let anyone connect to any network I maintain using the WPS process.

As I passed information about this vulnerability to family and friends, I received some feedback which implied confusion between WPS, WPA, WPA2, WEP, WDS, etc… Hopefully, someone at the Wi-Fi Alliance will wake up and realize that using acronyms that are all very close together does not make it very easy for consumers to make sense out of these things.

For you tech-savvy readers of this blog, please reach out and help some others secure their networks properly.

Update: 2012-02-12 – Hak5 aired an interview in episode 1024 found here which covers the WPS issue with even more detail.


Apple Upgraded the AirPort Utility to 6.0 for Lion Users: A Blessing or a Curse?

I always thought that one of the negatives of using Apple products in my network was the fact that you could not configure them through a browser.  All other consumer-grade routers that I am aware of provide a browser-based interface for configuring the security, WAN and LAN settings of the router.  Apple requires the use of an application they provide, now called “AirPort Utility”, to accomplish this same set of operations. This utility is available for Microsoft Windows and, of course, OS X, but has never been available for any flavor of Linux.

Apple recently pushed out version 6.0 of the AirPort Utility to Lion (10.7.X) users. Users of earlier versions of OS X are not offered this software upgrade. This utility allows you to configure AirPort base stations, Time Capsules and other Apple networking gear. Although the interface is much simpler to use, they have drawn a line in the proverbial sand, leaving some older Apple networking devices and some previously existing features by the wayside. CNET did an analysis of the features missing from the new version of the AirPort Utility.  The results of their investigation can be found here. As an individual who uses many of these features that are now inaccessible, I am a bit concerned.  In addition to removing the ability to configure a number of previously existing features, the new version also no longer allows you to configure 802.11g and earlier versions of their network products. Fortunately, Apple has kept the previous version, 5.6, available here for configuring these aging devices.

On the bright side, one of the advantages of running OS X is the ability to run multiple versions of an application.  I took advantage of this ability so I could have both versions of the AirPort Utility installed.  These steps should be done BEFORE updating to version 6.0. Perform the following steps to have access to both versions.

  • Since the AirPort Utility is a system file owned by root (the ultimate, all-powerful user on a UNIX system), we must assume the power of root to make these changes.  That is accomplished by executing the following sudo command and entering your password when prompted.

NOTE: In the step that follows, you become the “root” user. At that point, you have full power and authority to damage your system and make it inoperable. If you are not comfortable with that, stop now. I accept no responsibility for damage done here.

sudo su -
  • We next need to find the location of the Applications/Utilities folder on your system. The -d option tells ls to print the matching directory path itself rather than its contents.
ls -d /Volumes/*/Applications/Utilities
  • Make a note of the string that is returned to you. Next, we want to change our working directory to the location we just identified that holds your AirPort Utility. Replace <string> with the path that was returned by the previous command. If the path returned includes spaces, enclose the <string> path entered below in quotation marks.
cd <string>
  • Next, we want to make a copy of your existing version of the AirPort Utility. The “-r” option of the cp (copy) command requests that the copy be made recursively.  That is, copy the item requested and all of its contents including files and subfolders. Under OS X, applications are actually represented as folders that contain all (well, most anyway) of the information and resources needed to execute the program. The “-p” option tells cp to maintain file attributes like access time, modification time and permissions.
cp -rp "AirPort Utility.app" "AirPort Utility 5.6.app"

  • Since we no longer need the power of the root user, exit the root shell.
exit


If you now look in the Utilities folder within Applications, you should see two versions of the AirPort Utility there, named as above. Double-click the “AirPort Utility” application and feel free to upgrade it to the new version 6.0.  If you run the old version, you may be prompted to upgrade it.  Just decline the upgrade to maintain the old version, as is.

It is worth noting that running multiple instances of some programs is more complicated than implied here.  In this case, the AirPort Utility is reasonably well-behaved and self-contained. Some applications, like browsers or other applications that use customizable user profiles for their settings require more planning to support concurrent versions. It can still be done, however.

Unless you desperately need version 6.0 of the AirPort Utility, my opinion is to decline the upgrade. At some point in the future, users may be forced to take it, however. For the sake of advanced users, hopefully a new version of this utility will be made available soon which restores the functionality that has been removed.  I wouldn’t expect older devices to be supported, though. Unlike Microsoft, Apple does not always provide backward compatibility in their products.  That is sometimes a blessing and sometimes a curse.


Removing Microsoft Windows Updates: A Temporary Solution to Restore Productivity

Although most often harmless to installed software and computer configurations, Microsoft Windows updates can occasionally have a negative impact on your system. It isn’t possible for Microsoft to test every combination of hardware and software prior to shipping these updates. Often, these updates are time-critical due to security issues, and they are released as fast as possible to prevent serious vulnerabilities from spreading throughout the internet community.

Recently, a client of mine ran into a situation where a specific Windows update (KB2585542, discussed here) caused connectivity issues between Outlook and the Kerio Connect mail server.  Mail could be received, but not sent.  Although I don’t advise removing Windows updates as a long-term solution, it can be an effective short-term fix when business productivity is at stake.

Most people are very aware that Microsoft releases periodic updates to their operating systems. Users have several choices about how these are installed:

Windows 7: Update Choices

From a computer security perspective, selecting “Install updates automatically” is definitely the way to go.  Although the user can select when these updates occur, they are most often scheduled in the middle of the night.  If the installation of the update requires a system reboot (which many do), you may lose any unsaved work when the reboot occurs. It is a good practice to save all work before you leave your computer for any length of time, especially at the end of the day.

In our scenario here, we have investigated and found that a Windows update was installed which breaks existing functionality in Outlook.  Although a fix from Kerio was available, it would require installing a new version of the mail server and appropriate testing. I chose a short term fix of temporarily removing the offending Windows update from the impacted systems until the Kerio update could be properly deployed.

So now that we know we want to temporarily remove a Windows update, how do we do that? Windows updates are treated very much the same as any other piece of software installed on your Windows system. This example uses Windows 7. From the Control Panel, select Programs.

Windows 7: Programs View

Next, select View installed updates.

Windows 7: View Installed Updates

Finally, select the Windows update you would like to remove and click Uninstall.

Windows 7: Uninstall Update

Windows should follow the normal software removal process from this point.

Although the process is similar for all modern versions of Windows, there is a slight change if you are still running Windows XP.  In Windows XP, from the Add or Remove Programs area in the Control Panel, click Show updates. From there, select the update to be removed and click Remove.

In no way am I advocating the removal of Windows updates as a long term solution to any issue. When in a pinch, where productivity and the ability to operate a business are at stake, temporarily removing an update is probably a reasonable course of action.

It is also worth noting that if the installation of Windows updates is set to happen automatically, the next time it runs it will reinstall the problematic update.  For slightly longer time frames, adjust the frequency of checking for updates or modify your settings to download the updates, but manually choose when to install them. To restore the removed updates, you may manually run Windows Update at any time.



White Spaces: What Are They and Why Do I Care?

On December 22, 2011, the FCC announced approval of the first television white spaces database and device.  This announcement will have an impact on you in the future. An additional relevant FCC announcement can be found here.

In layman’s terms, white spaces are the unused portions of the broadcast television spectrum: the buffer frequencies between active stations, plus any channels left vacant in a particular television market. As part of the switch to digital television in 2009, additional parts of the spectrum became available. Under normal circumstances, these parts of the spectrum are sold and licensed for specific television channel use.  This new plan allows the unused space to be considered unlicensed and available for use by others. Many broadcasters are quite concerned over the possibility of these white space devices interfering with their broadcast spectrum. Common expected uses of this technology include offering broadband to communities (like rural areas, small towns or college campuses), transmission of signals to and from traffic cameras, monitoring of public and private areas (rest stops, public parks, and livestock), medical telemetry and other public safety uses.

The real value of these parts of the spectrum is that, due to their frequency and wavelength, the signals can penetrate buildings and other structures with ease. Those same structures hamper other technologies, like cordless phones, baby monitors and Wi-Fi devices, that normally operate in the 2.4 GHz area of the spectrum. In addition, these signals can carry over significantly longer distances while still delivering a substantial payload.

There are a few technical hurdles with the use of white spaces technology.  The first is determining which frequencies are available in any particular geographic area for unlicensed use.  The second is developing radio equipment that can take advantage of those available frequencies. Solutions currently exist for both of these issues.

The solution to the first issue is to track the use of licensed spectrum in specific areas by location. The plan is to have a handful of companies administer databases containing the spectrum use information. The first approved company in this category is Spectrum Bridge, Inc. Approval of about ten database administrators is likely, including Google and Microsoft.


Koos Agility Data Radio

The solution to the second issue is the development of software-defined radios which can adjust transmit power, gain and frequency to avoid interference on previously allocated channels or when competing signals are detected. Koos Technical Services, Inc., makes the first officially approved device that complies with the FCC ruling. Details of the product can be found here. This first device will work in conjunction with the Spectrum Bridge database and is expected to go online for a live trial in Wilmington, NC, on January 26, 2012. The trial will involve bringing broadband internet service to this rural area. Upon successful trials here, the technology will likely be expanded to other areas of the country.

Much debate (here, here and many other places) has occurred about how best to handle the use of this spectrum.  Some claim it is akin to real estate and should be auctioned off to the highest bidder for exclusive use. In my opinion, the prevailing decision favors the true advancement of technology over continued corporate greed. Several articles I have read point to the significant advances in technology that resulted from the free and open availability of the Wi-Fi bands more than a decade ago. Although many billions of dollars could have been made through the sale of this spectrum, the impact on the economy could be far greater if all can use this spectrum freely.  The FCC’s decision clearly supports this notion. Many companies, such as this one, are already positioning themselves for the inevitable push to market this new technology.

Although I am not a doctor (I don’t even play one on TV), I do have some concerns over the potential health risks associated with the use of the white spaces technology. What makes the proposed use of this spectrum suspect for me is that it is two-way.  That is, in order to use this spectrum as intended, you are not only passively receiving signals (like a TV), but also transmitting at some higher power level to send data back over these frequencies to another receiver.  Studies have shown that it is not only the frequencies being transmitted, but the general proximity of a high-powered transmitter, that can cause harm. Granted, the power of these transmitters will likely be significantly less than that of a normal television broadcast, but it will need to be higher than that of a standard Wi-Fi, Bluetooth or cell phone device. An article that discusses more of the potential health risks can be found here.  My goal is not to be an alarmist, just a cautious adopter of this technology.

I do believe that this technology will be adopted and widely used. I also believe a bit more study and governance may be required to protect the safety of the general public from misuse or abuse of this new slice of the technological pie.


Roku 2 XS: A Good Buy For Streaming Media

Although I already own an HD TiVo and an Apple TV, I have been looking for a streaming media alternative that would allow me to use other video sources for media consumption. For whatever reason, the HD TiVo seems to have problems with audio that is streaming from Netflix, and my Apple TV doesn’t support Netflix. While at Costco yesterday, I ran across the Roku 2 XS, which they had on sale for $85 with an HDMI cable and two months of Hulu Plus included. I pulled the trigger.

The Roku family of products is designed to allow easy access to streaming media from hundreds of different sources.  Many of these, like Crackle and Pandora, are free, but some, like Netflix or Hulu, require a monthly fee. The available channel lineup can be found here.

Roku Box


Additional information about this device can be found here.

This review is after initial setup and several hours of use. I have set up several channels, including Netflix, Hulu, Amazon and several free channels.

The device is very small– about the size of a hockey puck. In fact, it is almost too small to support the weight of the cables that are connected to it. I did have a small problem during the initial setup. After the initial software update, I was forced to power cycle the device to get it to connect to the Roku service.  This seems to be a rather common issue and one that Roku should be able to fix. It may also have been a Wi-Fi issue, since I switched from wireless to wired during this fix, even though the device had successfully connected to my Wi-Fi network.

Roku Activation Issue


One of the chief competitors to this device is the Apple TV.  Here is a brief comparison of the high level features.

Comparison to Apple TV

  • HDMI output only on Apple TV.  Composite and HDMI available on Roku. This allows the Roku to be used with almost any TV, while the Apple TV requires a more modern TV.
  • Apple TV only allows content from the Apple ecosystem, while Roku allows content from almost every source except the Apple ecosystem.
  • Roku has a USB port which provides the ability to play local content from external USB sources.  The USB port is located on the side of the device, not the back, as one might expect.  The Apple TV provides a USB port as well, but it is only for maintenance of the device, not external storage.
  • Roku provides a micro SD slot for additional storage on the device. Unfortunately, this is related to an “Ugly” down below.
  • Physically, the Apple TV is a bit bigger.
Comparing Roku and Apple TV


Apple TV Back


Roku Back


The Good

  • Easy setup
  • Comfortable and well designed motion-sensing remote that allows for game playing (Angry Birds came free in my bundle)
  • Good selection of “channels”, some paid (like Amazon, Netflix and Hulu), but many free, including Crackle, TED, Revision3 and TWiT.
  • Wi-Fi and ethernet connections
  • Simple user interface
  • Even using composite video, video quality is quite good.
  • USB connection supports FAT16, FAT32, NTFS and HFS+.
  • Multiple local USB hard drives are supported through the use of a USB hub.  For spinning drives, use of external power may be required since the Roku provides very little power over USB.

The Bad

  • The interfaces used to enter usernames and passwords for the paid services are each designed differently and have different layouts.
  • To even sign up for Roku, you need to enter credit card information.
  • During setup, I had to power cycle the device to clear a connection error.  This seems to be a pretty common error that Roku should be able to fix.
  • I would like to have the ability to pull in RSS feeds or other podcasts that don’t specifically have an application on Roku.  I expected to have the ability to just enter a URL for additional content to be picked up.
  • I did find it a bit unusual to have to load USB support as a channel.  I expected USB support to be automatically present on the device.
  • The supported file types from USB are very limited.
  • There is a discrepancy in the documentation about the video formats that the Roku player supports. The help screen of the player claims support for MKV files.  This file type is not listed on the web site here.
USB Supported Types


The Ugly

  • Apparently, the number of channels you can have on the Roku is limited.  I’m not sure exactly what that limit is, but if you exceed it, you must either add a micro SD card to hold the additional data or be forced to download channels again to use them.  I believe that this is a design flaw.
  • Even after reading the available documentation on purchasing channels, I’m not clear whether these are one-time purchases or monthly fees.  The documentation doesn’t really say.


In theory, if you wanted to remove yourself from the Apple ecosystem, you could convert your iTunes library and have it available on Roku from an external USB device.  I don’t know how well this would work though, since the interface for USB media is not very friendly.

USB Media


USB Media Selection


If you are looking for a moderately priced gift for someone in your life who enjoys music, movies and other online media, the Roku 2 XS would be a really good choice. There are also several other flavors of Roku boxes that can be found here.  I think any one of them would be a welcome addition to your technology arsenal.


Kindle Fire: Unboxing and First Impressions

As indicated in an earlier post, I pre-ordered a Kindle Fire.  It arrived safely today.  Here are a few photos of the unboxing process.  I was amused by the packaging, which made it look as though it was packed specifically for shipping.  I was under the impression that these devices were also going to be sold at big box stores, so I expected a package with a bit more “store shelf” reading material, like specifications, warranty, etc.  Instead, it arrived in what looks to be a plain brown shipping box, form-fitted to the device.

An inside pocket on the cover holds a single card with basic operating instructions. Under the Kindle Fire is a single micro-USB power cord in a cardboard sheath.  Although the device supports a USB connection to a computer for transferring files, no additional cable is provided.

Kindle Fire Open Box



Kindle Fire Tray



The device arrived nearly fully charged.  After powering up and setting up Wi-Fi, it knew who I was.  It downloaded a software update and was ready for use.  Prior to writing this review, I spent roughly two hours with the device.  I didn’t have time to check out all of the functionality, but here are my impressions of what I did work with.

The Good

  • I really like the 7″ form factor.  It is easy to hold and although not sticky, it has a slip-resistant back.
  • The screen is quite crisp and video quality of several downloaded movies and trailers was quite good.
  • I was impressed by how fast content downloaded.  I am an Amazon Prime subscriber and I downloaded several movies and videos to the device to get some idea of the quality.  A full-length video downloaded to the device (fully transferred, not buffered) in less than two minutes.  This allowed me to move forward and backward within the movie easily.
  • There is quite a bit of good content available for free to Amazon Prime members.
  • Battery life on the device seems to be quite good, as advertised.  In two hours of heavy use, it used roughly 25% of the battery. I do not believe that the battery is end-user replaceable though.
  • I like Pulse.  Pulse is the Fire’s news aggregator.  I like the layout.  Refreshing after adding content sources took much longer than I expected, however.
  • The browser is quite fast and allows for 10 tabs to be open at once.
  • The device is fast and scrolling is very fluid.
  • The device has the standard Kindle reading experience– no more, no less.

The Bad

  • There are some books that I have purchased from Amazon that I no longer want to see presented to me.  I was unable to find a way, from the device, to no longer show these items.
  • I was unable to find a way to perform a screen capture on the device.
  • When the device is powered up, it assumes it is in portrait mode.  If held in landscape mode, the screen orientation is incorrect at power up.
  • It feels a bit heavier than I expected it to be.
  • I still have security concerns over sending all of my browser traffic through Amazon servers.
  • Although you can password protect the device, there is no mechanism to wipe the device after a certain number of failed attempts.  This is quite standard on most portable devices today.
  • Many screens on the device support a “back arrow” to go back to a previous screen.  I often needed to tap it more than once to get it to react.  It is almost as if you have to tap once to de-select the current pane and then once to select the “back arrow” button.
  • In bright light, I found the screen to be a bit too reflective for my taste.  For ease of reading, I dimmed the lights and reduced the brightness on the device.

The Ugly

  • The email application provided does not support Microsoft Exchange.  This makes it a consumer-only device, not something a business person would be able to carry in lieu of a laptop or iPad.
  • I found no way to increase or alter the font within the email application.  This will significantly limit the use of this device as an email appliance for anyone without perfect sight. I found the font used to display the body of the messages to be very small and too hard to read for any length of time.  The application does not allow you to zoom in on content like many competitive devices do.
  • I connected the Kindle Fire to my Mac and transferred several PDF documents. When I view the Documents area from the Kindle Fire, additional files with similar names show up.  I didn’t spend enough time with it to determine what exactly these are, but they are pretty annoying.  It didn’t seem to generate a thumbnail view of the documents I transferred either.
  • After my two hours of use, my eyes are fatigued and I have a headache.  I’m not sure if it is because I spent too long trying to adjust the tiny font in the email application or if it has something to do with the font used throughout the device.
  • I found absolutely no features related to accessibility on this device.
  • The Fire doesn’t support many common video formats.  I transferred several videos in .m4v and .mov format which could not be played on the device. This appears to be a method for Amazon to force you to purchase content directly from them instead of using your own content or media from other vendors.
  • The Kindle Fire User’s Guide comes pre-loaded on the device.  Unfortunately, it isn’t complete and refers you to other online sources for additional information which I didn’t find to be complete either.  For example, there is very little documentation that I could find relating to the security settings of the device, credential storage and device administrators.


The Kindle Fire has some good points but it has a number of issues that would make me think twice about recommending it to the masses.  The biggest issue for me is the email client that is provided.  I would estimate that 50% of the time that I am using my iPad, I am reading or responding to email. The inability to adjust the look of the content on the Fire would prevent me from carrying this device regularly.

If you are already an Amazon Prime member, have purchased most of your content from Amazon already or are interested in purchasing future content directly from Amazon, this may be a good choice at $200.  If you are planning on using this device as a travel companion where a decent Wi-Fi connection is unavailable or cost prohibitive (like on a plane), this may not be the best choice for you.  This device is definitely not a replacement for a laptop and should not be considered a content-creation device.

I would strongly recommend borrowing a device from a friend or stopping at a big box store and giving it a spin before plunking down cold hard cash for it.  Since this device has very little (8 GB) storage capacity, I would also be certain that I had a very good Wi-Fi connection in any location that I wanted to use the device.  Without an internet connection for streaming media, this becomes a much less useful device.

In a nutshell, this is a version 1.0 device.  Due to its price, it will undoubtedly steal sales away from competitors like the iPad.  Although well built, I don’t believe it has the “fit and finish” or attention to detail that the Apple iPad has, while providing only a subset of the functionality.


Traits and Skills of a Good Software Test Engineer

I have been in the software industry for nearly 25 years.  In that time, I have had various roles including software developer, trainer, software builder, tester, development manager, test manager, test lead and software release coordinator.  I recently put together some job descriptions for test positions that my company would like to fill. As I wrote them, I jotted down some traits and skills that I thought the ideal test candidates would have. I want to take a few minutes to share those traits I have identified with you.


In a nutshell, a software tester is someone who attempts to verify that a particular piece of code or product meets a set of pre-defined requirements.  Similarly, software testers can also help identify areas where the product definition is incomplete or ambiguous. These requirements can be defined as functionality, performance, reliability or other similar characteristics. Often, these requirements are determined by analyzing “use cases” which define how those who use the product will actively interact with it.

Activities that software testers routinely perform include writing test plans, reviewing requirements and designs for testability, participating in code reviews, automating execution of tests, generation of test data, opening bug reports when problems are found, “triaging” bugs, execution of tests and analysis of test results. In some cases, testers are also involved in creating the software tools and processes needed to execute or track results of the testing. Each organization I have ever worked with defined the role of a software tester slightly differently based on the existing corporate culture.
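A couple of the activities above–generating test data and automating test execution–can be illustrated with a minimal example. The unit under test here is hypothetical, invented purely to show the shape of the work:

```python
# Minimal illustration of test automation: generated test data driving
# a hypothetical unit under test, with pass/fail counts for reporting.

def normalize_username(raw: str) -> str:
    """The hypothetical unit under test: trim and lowercase a username."""
    return raw.strip().lower()

# Test data as (input, expected) pairs, covering routine and boundary
# cases -- including the unexpected input a good tester always probes.
CASES = [
    ("Alice", "alice"),
    ("  bob  ", "bob"),
    ("", ""),           # empty input
    ("ÜSER", "üser"),   # non-ASCII input
]

def run_suite() -> tuple[int, int]:
    """Execute every case and return (passed, failed) counts."""
    passed = failed = 0
    for raw, expected in CASES:
        if normalize_username(raw) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed

print(run_suite())  # → (4, 0)
```

Real test organizations wrap this pattern in a framework (xUnit-style runners, result databases, triage tooling), but the core loop–data in, expected versus actual out–is the same.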

Traits and Skills

So, what does it take to be a good software tester?  Here are some very valuable traits that all successful software testers will share.  These are in no particular order.

  • Attention To Detail – To testers, the details matter. Is the error message returned by the product accurate and spelled correctly? Did the documentation reflect the true operation of the system?  Were the results correct and consistent?  How long did the application take to run? Did it produce any unexpected results?  Did it crash?  What impact did running the software have on the system?  How much memory did it use? How much of the processor did it use?  Did it work as expected and produce meaningful and correct results?
  • Know the Customer – Good testers have a desire to find out as much as they can about how the customers will be using the product and what their expectations of the product are.  They may be observers during customer training or demos, read reviews of the product that customers have written or participate in discussions with customers directly about the product. They must understand that there are different types of customers– first time users, those who have used similar products, advanced users and others.  Each type of user will use the product differently and have different expectations of what the product should do or how it should work.  In many test organizations, testers are considered to be the “voice of the customer”.
  • Be the Customer – If at all possible, the tester should be a regular user of the product. There is no better way to learn about a product than to try to integrate it into your work or daily life. In Microsoft terminology, this is “eating your own dog food” or “dogfooding”.  Microsoft and other companies often release early versions of their products to be used internally. This allows them to get feedback from technically-savvy “friendly” users who are actively using the product earlier in the development cycle.  Many defects are often found during “dogfooding”, but there is a price. Occasionally, significant problems are found which impact the functionality or stability of the product. This can lead to lost data, lost productivity and lost time for those participating in the early release.
  • Creativity – Testers need to not only think about how the product was intended to be used, but also how customers may use the product in other ways– often ways that were not intended and are not supported.
  • Training – The best testers have had some form of training about common testing techniques and tools.  They are familiar with common problems found during testing like buffer overruns, unexpected input and data corruption. Many testers hold degrees in computer science, software engineering or another technical field. As with many professions, it is best to keep up with current methodologies or trends through additional training, participation in online forums, reading available literature or other self study.
  • Troubleshooting Skills – When tests do not produce expected results, it is often up to the tester to identify the problem.  Was it a problem with the use case or a missed requirement? Documentation? Hardware? Network? Could it be a problem with the test itself? Was the test executed properly? Basic troubleshooting skills are needed to zero in on the root cause of the failure. Depending on the nature of the testing, the type of product and the skills of the tester, they may wish to investigate the problem further within the source code of the product.
  • Quick Learner – Testers need to be able to “ramp up” on new products and changes to products quickly. Most often, they are the first real users of a fully-integrated system. This makes them a valuable resource in reviewing user guides, installation documentation and other end user deliverables.
  • Domain Knowledge – The best testers will have experience with the technology being used, previous versions of the product or competitive products.
  • Organizational Skills – Most software testers deal with massive amounts of data. They work with large numbers of tests, multiple runs of the tests, status of each test, log files, test output, etc…  They must be able to quickly summarize test results either manually or through the use of automated tools, to allow tracking of the status of the project to occur in a timely manner.
  • Development Experience – Those testers who have worked at some point as developers and are familiar with the software development environment and programming methodologies being used are more productive. Having this experience allows them to more easily troubleshoot issues during testing since they can more easily understand how the application should work by reading through the available source code. For very tricky problems, testers may even modify product code to provide additional debugging or troubleshooting information when executed.
  • Good Decision Maker – It needs to be understood that it is impossible to completely test any substantial software system. Part of the responsibility of being a software tester is to help identify risks to the product and the schedule and to prioritize testing effort to allow for the most effective use of time.
  • Good Communication Skills – Testers need to be able to communicate effectively. Being able to find a software defect is not of much use if you cannot properly inform others of your findings.  Often, testers and developers interact to form a better understanding of the product.  To do this, it is essential that testers have the ability to formulate and ask questions that, when answered, provide the information they need to perform their job.
  • Integrity – The ultimate job of the tester is to identify defects in the product as early as possible, even if shining a light on the defect causes repercussions for developers, the product or the product schedule. The reason teams invest in testers is to find problems. Testers who would cover up problems or in some other way misrepresent the severity of a defect are not fulfilling their duties. A tester who is willing to compromise their integrity is in the wrong line of work.
  • Passion and Tenacity – Testers must want to make the product better. As the product matures, it often becomes much more difficult to find additional defects. Testers must be willing to keep looking. There is always at least one more bug in the product.
  • Curiosity – Often, good testers are those individuals who used to dismantle their toys to “see how they worked”. Sometimes, they were even able to reassemble them into some sort of working condition.
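Several of these traits–organizational skills in particular–boil down to summarizing a large pile of results quickly. As a toy sketch (the run-log format here is invented for illustration):

```python
# Collapse a raw test-run log into per-status counts plus a failure
# list -- the quick summary a tester hands to the project team.

from collections import Counter

RUN_LOG = [
    ("login_test", "pass"),
    ("logout_test", "pass"),
    ("upload_test", "fail"),
    ("search_test", "pass"),
    ("report_test", "skip"),
]

def summarize(results):
    """Return status counts and the names of failing tests."""
    counts = Counter(status for _, status in results)
    failures = [name for name, status in results if status == "fail"]
    return counts, failures

counts, failures = summarize(RUN_LOG)
print(dict(counts))  # → {'pass': 3, 'fail': 1, 'skip': 1}
print(failures)      # → ['upload_test']
```

At real-world scale the log is thousands of rows across multiple runs and configurations, but the deliverable is the same: a status roll-up plus the list of failures that need triage.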

Although not an exhaustive list, it does cover most of the fundamentals. Different skill sets and traits will be valued differently by each company based on maturity of the product, existing staff, corporate culture and expectations of the team.
