Mad Computer Scientists at VMWare Introduce Zombie IE6

I’ll admit to being somewhat of a Microsoft apologist, but as a web developer, I couldn’t help being a little giddy about the idea that the venerable Internet Explorer 6 finally seemed to be on the ropes, thanks to the one-two punch of Google officially dropping support for the 10+ year old browser and Microsoft finally releasing an upgrade to Windows that was conceivably worth the effort of ditching XP. Unlike previous quixotic attempts to kill it, I think this time it might take.

Maybe it is a little premature to give the eulogy, considering that by some reports IE6 was still the second most used browser as recently as last month, but given its accelerating decline in market share over the last two years, one can only hope.

In fact, one of the biggest IE6 holdouts on my company’s client list finally took the plunge and upgraded…to IE7. Hey, at least it’s progress, right?

Then I saw this little news nugget on the VMWare ThinApp blog and it gave me pause.

For those who didn’t follow that link, or didn’t make it past the odd “Web Apps are the new DLL Hell” prologue, here’s the terrifying part:

I’m happy to report we have now have IE6 fully virtualized and working perfectly on Windows 7 32bit and 64bit.

Although getting IE6 to work “perfectly” on any version of Windows is a noteworthy accomplishment, my initial reaction to this news was not unlike what I’d expect to feel upon hearing that cloning technology had been developed and Hitler was picked as the prototype.

"Sometimes Dead is Bettah"

Don’t get me wrong. I’m a big fan of application virtualization, and VMWare ThinApp has been instrumental in solving deployment problems for some of my applications in environments with rigorous security controls on employee workstations.

Despite how difficult it is to get someone on the phone at VMWare who has even heard of ThinApp (it has been almost two years since the acquisition), it really is a solid and easy-to-use virtualization platform.

You want technical support for thin what?
Are you sure you dialed the right number?

I suppose I can’t blame them for facilitating such an abomination. Keeping legacy apps alive for those who can’t bear to give up on them is, after all, a key use-case for an application virtualization platform. Like poor old Dr. Frankenstein, you have to give them mad props for creating such an elegant workaround for mortality, even if that workaround did maul a few villagers, or their web pages.

In fact, the ability to associate web-pages with different virtualized browser versions seems like a really cool trick… A really cool massively kludgey trick.

I just wish they didn’t sound quite so gleeful in their announcement. Can’t a fellow enjoy his schadenfreude in peace?

At least they could have made it difficult to accomplish. Even Doc Frank didn’t create an “Easy Button” for re-animating corpses, and he was allegedly almost as insane as James Carville.

Currently the process for creating a ThinApp IE6 package is a little complex…We know this is popular use case, so we’ve turned the process of capturing IE6 into a few clicks.


On the other hand, maybe I need a more optimistic perspective. I suppose an argument could be made that the availability of this technology could promote the use of modern browsers at curmudgeonly organizations.

For example, you could argue that it is safe to upgrade everyone’s browser, because application virtualization will keep those ancient internal apps that no one has the budget to update running.

Selling it to management shouldn’t be too difficult…

“Application Virtualization?” I’ve never used that buzzword before, and now that my grandmother has heard of “Cloud Computing” I need something snappier to put on my PowerPoint slides. That should just fit if I take out the cloud-with-a-dollar-sign-on-it clip-art.  Let’s do it!

SQL Injection Prevention Fail

Well, at least they know that SQL Injection is an issue….

I just hope, for the sake of the customers who bank at Sacramento Credit Union, that the programmers responsible for this web site aren’t relying on blacklisting certain strings and/or characters as the sole means of protecting their system from SQL Injection attacks, but I’m not optimistic.

Regardless, this is also a classic example of taking a programming problem and dumping it in the user’s lap. If I’m a user of this site I would definitely be thinking, “Thanks for the lesson in cyber security, propeller head. Now can I just get on to finding out my checking account balance? I don’t really have time to do your job for you today.”

SQL Injection Fail

Here is the highlighted text, in case it is difficult to read in the image:

Why are the Security Questions used?
The first time you login and enroll in Protection Plus, you will be asked to enter five Security Questions and corresponding answers. The Security Questions are used if you do not want to register the computer you are currently using. With the Security Questions, we can make sure it is you logging in when you use different computers, such as, a internet bar computer.

The answers to your Security Questions are case sensitive and cannot contain special characters like an apostrophe, or the words “insert,” “delete,” “drop,” “update,” “null,” or “select.”

Why can’t I use certain words like “drop” as part of my Security Question answers?

There are certain words used by hackers to try to gain access to systems and manipulate data; therefore, the following words are restricted: “select,” “delete,” “update,” “insert,” “drop” and “null”.

Computer Alcoholism

Ignore for a moment the sins of grammar and the promotion of “Security Questions” to a proper noun with all initial caps due thereto. What in the wide world of sports is a “bar computer?”

Are they referring to those video poker machines at bars? Are they implying it is not safe to use those machines to do my banking?
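For the record, the standard prevention here is parameterization, not keyword blacklists. A minimal T-SQL sketch of the idea (the SecurityAnswers table and column names are hypothetical, purely for illustration):

```sql
-- User input is bound as a parameter, so it is always treated as data,
-- never as executable SQL. No keyword blacklist required.
DECLARE @answer nvarchar(100) = N'select * from drop null';  -- hostile-looking, but harmless

EXEC sp_executesql
    N'SELECT COUNT(*) FROM SecurityAnswers WHERE Answer = @ans',
    N'@ans nvarchar(100)',
    @ans = @answer;
```

With the input parameterized, users could answer with apostrophes, “drop,” or anything else they please.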

Creating a unique constraint that ignores nulls in SQL Server

Proper constraints and data validation are the cornerstone of robust applications. They enforce explicit contracts between your database and the applications that rely on it, reducing the number of contingencies that your application code needs to account for.

In this article, I address a specific variant of the uniqueness constraint that is a little tricky to implement in Microsoft SQL Server, which has likely discouraged developers from applying this type of validation even when it is indicated in the logical data model. I’m not aware of a standardized term, so let’s just call it a “Null Agnostic Unique Constraint.”

Null Agnostic Unique Constraint
A constraint that doesn’t allow duplicate values with the twist that Null values are not considered duplicates of other Null values.

A practical example
This table and design goal are a good example of a situation where you might need a null agnostic unique constraint.

Customers Table
CustID integer, Primary Key, Identity
SSN varChar(11)
CustName varChar(100)

Validation Requirement: For privacy reasons, not every customer is required to provide their SSN, but the table should reject any duplicate values among customers with this field populated.
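In T-SQL, that starting point might look something like this (a sketch of the table above, with no SSN uniqueness enforced yet; adding that is the subject of the solutions below):

```sql
-- Customers table as described above. SSN is optional, so NULLs are expected.
CREATE TABLE [Customers] (
  [CustID]   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
  [SSN]      varchar(11)  NULL,
  [CustName] varchar(100) NOT NULL
)
```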

The Problem

The core issue behind dealing with nulls as part of a unique index is the ambiguity about what the expression NULL=NULL should evaluate to. In database systems NULL is understood to represent a value that is unknown or undefined. Is one unknown the same as another? That is a matter of debate.

Microsoft, through the SET ANSI_NULLS option (effectively ON by default for modern connections), seems to take the more logical position that you can’t say whether two unknown values are equivalent, and by default evaluates NULL=NULL to NULL (unknown).

The SQL-92 standard agrees that NULL=NULL should evaluate to unknown rather than true, and codifies that position in its prescribed treatment of NULL values in the implementation of unique constraints:

If columns with UNIQUE constraints do not also have NOT NULL constraints, then the columns may contain any number of  NULL values.

Unfortunately, Microsoft didn’t true up SQL Server completely to this standard in its implementation of unique constraints/indexes. Regardless of the ANSI_NULLS setting, SQL Server treats NULL as a discrete value when checking for duplicates. That is, it allows at most a single NULL value in a unique column.

Oddly enough, this implementation seems to assume that NULL=NULL evaluates to true. Further, it is a tad befuddling that MS decided to implement the ability to allow NULL values in unique columns, which is optional in the standard, but ignored the hard requirement not to treat multiple NULL values as duplicates.
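You can see the behavior for yourself with a throwaway temp table (a quick sketch):

```sql
-- SQL Server treats NULL as a discrete value in unique columns:
CREATE TABLE #t ([SSN] varchar(11) NULL UNIQUE);
INSERT INTO #t VALUES (NULL);  -- succeeds: one NULL is allowed
INSERT INTO #t VALUES (NULL);  -- fails: the second NULL is treated as a duplicate
DROP TABLE #t;
```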

The bottom line is that implementing a null agnostic unique constraint on our Customers table won’t be quite as easy as it would be with PostgreSQL or MySQL, which handle duplicate checking on NULLs in accordance with SQL-92.

Solution 1: Filtered Indexes

The first approach is clearly the cleanest and most performant, but it requires filtered indexes, which were not introduced until SQL Server 2008.

Under this approach, you simply add a unique index to the table on the field that needs to be unique, and use the new WHERE syntax of the CREATE INDEX command to include only the rows where that field is not null.

Here’s how it would work using the Customers table example.

CREATE UNIQUE NONCLUSTERED INDEX [UX_Customers_SSN]
    ON [Customers] (SSN)
    WHERE SSN IS NOT NULL

Any values inserted or updated into the SSN column will be checked for duplicates unless they are NULL, because NULL values are excluded from the index.

Easy, huh?
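A few throwaway inserts confirm the behavior (assuming the Customers table and the filtered unique index described above):

```sql
INSERT INTO [Customers] (SSN, CustName) VALUES (NULL, 'Bob');            -- OK
INSERT INTO [Customers] (SSN, CustName) VALUES (NULL, 'Carol');          -- OK: NULLs aren't in the index
INSERT INTO [Customers] (SSN, CustName) VALUES ('111-22-3333', 'Alice'); -- OK
INSERT INTO [Customers] (SSN, CustName) VALUES ('111-22-3333', 'Dave');  -- fails: duplicate SSN
```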

Solution 2: Constraint on computed column

If you are using a pre-2008 version of SQL Server, it isn’t quite so easy, but here is a passable workaround. It should perform reasonably well, but it has the downside of cluttering the table with an extra computed column that won’t make much sense to anyone else who looks at it.

Here, we create a unique constraint on a computed column that forces the null values to look unique. The column evaluates to the value you want to check for uniqueness unless that value is NULL, in which case it evaluates to a value unique to that row, which won’t trip the duplicate check. The easiest way is to base the computed column on an identity field.

Here is the implementation of this approach, again using the Customers table.

CREATE TABLE [Customers] (
  [CustID] int IDENTITY(1,1) NOT NULL PRIMARY KEY,
  [SSN] varchar(11) NULL,
  [CustName] varchar(100) NOT NULL,
  [UniqueSSN] AS (CASE WHEN [SSN] IS NULL
                      THEN cast([CustID] as varchar(12))
                      ELSE '~' + [SSN] END),
  CONSTRAINT [UQ_Customers_UniqueSSN] UNIQUE ([UniqueSSN])
)

As you can see, the computed field will contain SSN, unless that field is NULL, in which case it will contain the CustID.

The ~ character is prepended to the SSN just to cover the remote possibility that a CustID might match an existing SSN.

Note: If you are using at least SQL 2005, you might be able to squeeze some extra performance out of this by marking the computed column PERSISTED.
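To sanity-check the approach (assuming the table above, with the computed column named, say, UniqueSSN, and CustID an identity starting at 1):

```sql
-- Two NULL-SSN rows get distinct computed values, so they coexist peacefully;
-- only a true duplicate SSN trips the unique constraint.
INSERT INTO [Customers] (SSN, CustName) VALUES (NULL, 'Bob');            -- UniqueSSN = '1'
INSERT INTO [Customers] (SSN, CustName) VALUES (NULL, 'Carol');          -- UniqueSSN = '2'
INSERT INTO [Customers] (SSN, CustName) VALUES ('111-22-3333', 'Alice'); -- UniqueSSN = '~111-22-3333'
INSERT INTO [Customers] (SSN, CustName) VALUES ('111-22-3333', 'Dave');  -- fails: duplicate
```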

Solution 3: UDF Check Constraint (Not recommended)

I’m mentioning this for the sake of completeness, but with the caveat that this approach could create considerable performance issues if used on any table with a significant amount of INSERT or UPDATE traffic.

Typically a check constraint is used for simple validation based only on values from the row being inserted or updated, but there is a loophole that allows you to access other rows in the table through the use of a user-defined function (UDF) in the check constraint.

To implement this approach, you would write a UDF that checks the table for duplicates of a specified value, excluding, of course, the current row being updated.

There might be situations where this is preferable to the computed column constraint method, but I can’t think of one offhand. Use this approach at your own peril.

[Implementation omitted to discourage its use.]


Tip: Use delayed service starting to speed up the boot process of your development machine

Waiting on the computer

One of the minor aggravations in my life, right behind lyricists who want to “hold me tight” despite all the homeless adverbs in the world, is the time required to get my Windows development machine from a cold brick to a state where I can do productive work.

This is exacerbated by the fact that I, as a developer, tend to restart my machine more often than a typical computer user. Also, I’ve got a number of heavyweight services that run on my development machine, including web and database servers, so I can work on my projects when disconnected from the mothership.

To minimize boot time, I’ve been doing my best to set the services that I use infrequently to Manual and then start them only when I am ready to use them. But today I found another option. While I was in the Services manager starting up a local SQL instance, I noticed an unfamiliar value in the Startup Type column: Automatic (Delayed Start).

Automatic (Delayed Start) Service Startup Type

Delayed Start? What's that all about?

After some quick research, I discovered that this startup option was introduced in Vista to expedite the boot sequence by de-prioritizing services that need to be launched at startup, but for which there is no hurry to get them spun up.

The gist of it is this: Services with this setting will be launched at the end of the start-up process and the initial thread is given a priority of THREAD_PRIORITY_LOWEST to avoid sacrificing UI responsiveness during the start-up sequence just to get things like “Google Updater” running immediately.

Some candidates for delayed start-up immediately come to mind:

  • Local development instances of database or web servers.
  • Updaters: Windows Update, Google, Windows Search, any type of indexer.
  • Any of the crapware from Apple or Adobe that they insist is so important it must run at all times.

Maybe I’m late to the party discovering this feature, but like many companies, mine completely ignored Vista and is just now getting around to Windows 7, discovering in the process a lot of nice “new” features that have technically been around a while.

I’m not proud. I’m willing to admit my extended ignorance of this feature if it can benefit another developer out there.

Did you guys know about this? Anyone know of other nice goodies in Windows 7/Vista  that are especially handy for tweaking development machines?

Let me know in the comments!

Syntactic sugar can make your application fat and slow

Although some of us might not like the association, programmers are essentially just another class of end-user. Hidden beneath our masochistic penchant for vi and a command line, programmers are just as thirsty for cushy features that simplify their tasks and bend their tools around their own aesthetic preferences.

I’m not so much talking about GUI programming tools or code generators; Lord knows we hate those. I’m referring to those handy language features that don’t necessarily add any extra capabilities, but provide a way to write code that is more readable and usually more succinct.

For example, C#’s using statement not only lets you wrap concise blocks around your IDisposable resources, it also cleans up after you just like your mother did back in the day.

 using (SqlConnection conn = new SqlConnection(connectionString))
 {
     SqlCommand cmd = new SqlCommand("SELECT TOP 1 Animal FROM Zoo", conn);
     conn.Open();
     string myNewPet = (string)cmd.ExecuteScalar();
 } // thanks mom!

Of course you can accomplish the same objectives in your code without these handy shortcuts, but once you learn them you probably won’t want to. Peter Landin, who put the Lambda in Lambda-Lambda-Lambda, coined the term ‘syntactic sugar’ to describe these elements of programming languages.

A construct in a language is called syntactic sugar if it can be removed from the language without any effect on what the language can do: functionality and expressive power will remain the same.
-Wikipedia (Omnibus source of all human knowledge)

Great, now I have a name for something I already knew all about.
Thanks John, but I’ve got work to do…

A Rose By Any Other Name

ASCII Rose

Not so fast; I’m just (finally) getting to the point of this article. Syntactic sugar isn’t always just another entry point to the same functionality. As is true with most things, you can’t delegate the work without giving up some measure of control. Further, it is dangerous to assume that the architects of the platform you use, however smart they must be to hold that job, are better suited to make those calls anyway.

They may know the platform better than you, but remember that they can never understand as well as you what you are ultimately trying to accomplish. Optimizing for an unknown problem requires making assumptions that may not hold in your specific usage.

By no means am I saying that you should avoid using syntactic sugar, just that it is prudent to understand to some degree what is happening under the covers to avoid nasty surprises.

Syntactic sugar can be more fattening than it looks

If you have been following this blog, you know that I recently completed my certification for SQL Server 2008. While brushing up on the newer features of T-SQL, I came across a bit of previously unfamiliar syntactic sugar: the WHERE CURRENT OF syntax.

In the context of an open updatable cursor, it is a shorthand way to reference the current row:

UPDATE MyTable SET TargetField=2 WHERE CURRENT OF myCursor

instead of
UPDATE MyTable SET TargetField=2 WHERE MyTable.ID = @FetchedIDOfCurrentRow

Rock on! I mean, if you’re n00b enough to be using cursors in the first place, at least you can do it with style, right?

Well, yes. Until my profiling turned up some shocking results. As it happens, the WHERE CURRENT OF syntax performed consistently and considerably slower than the same code using an ‘equivalent’ WHERE clause.

If anything, I expected it to outperform the legacy syntax: because I was explicitly communicating in advance which row would be updated, SQL Server should have had the row ready and waiting for the update. Bzzzt! Wrong. Fundamentally wrong. Couldn’t be wrong-er if my name was Wrong Wrongerstien.

Despite anecdotal evidence supporting my initial findings, I was still incredulous that my assumptions could be smashed so soundly. So I set out to get some hard data using the simplest version of an update inside a cursor that I could muster:

 UPDATE MyTable
    SET TargetField=@FetchSource+2 WHERE CURRENT OF myCursor
    --WHERE ID=@FetchID
 FETCH NEXT FROM myCursor INTO @FetchID, @FetchSource
 END <snip>

I tried a number of cursor types and indexing strategies, but in every case the WHERE CURRENT OF syntax was demonstrably slower, and the margin increased near-linearly as the source data got larger.
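For anyone who wants to reproduce the comparison, the harness looked roughly like this (a sketch; the schema is illustrative, and the commented-out WHERE clause is swapped in for the alternate timed runs):

```sql
-- Illustrative table and data for timing the two UPDATE variants.
CREATE TABLE MyTable (
    ID int IDENTITY(1,1) PRIMARY KEY,
    SourceField int NOT NULL,
    TargetField int NULL
);

INSERT INTO MyTable (SourceField)
SELECT TOP (100000) ABS(CHECKSUM(NEWID())) % 1000
FROM sys.all_objects a CROSS JOIN sys.all_objects b;

DECLARE @FetchID int, @FetchSource int;

DECLARE myCursor CURSOR LOCAL FOR
    SELECT ID, SourceField FROM MyTable
    FOR UPDATE OF TargetField;

OPEN myCursor;
FETCH NEXT FROM myCursor INTO @FetchID, @FetchSource;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE MyTable
       SET TargetField = @FetchSource + 2
     WHERE CURRENT OF myCursor;
     -- Timed against the 'equivalent' version:
     -- WHERE ID = @FetchID;
    FETCH NEXT FROM myCursor INTO @FetchID, @FetchSource;
END

CLOSE myCursor;
DEALLOCATE myCursor;
```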

The Query Execution Plans

I’m still dissecting the execution plans and will come back and post more once I understand the discrepancy, but the high level perspective is pretty telling.

Where ID=@FetchID Plan:
Clustered Index Update -> Update

Where CURRENT OF Plan:
9 additional steps including another Clustered Index Update, and 2 Clustered Index Seeks.

That’s all for now. Be careful out there and watch out for those empty code calories from syntactic sugar!

The Zen of Certification and the Microsoft Certified Master Program

The Revenant MCP

This week I became Microsoft certified again, this time by passing the SQL Server 2008 Database Development exam (70-433), which bestows on me the privilege of calling myself a Microsoft Certified Technology Specialist (MCTS). I’m assuming this is the new and improved version of the Microsoft Certified Professional (MCP) certification that I remember from the last time I paid attention to these things.

Microsoft Certified XBOX 360 Achievement

It has been almost a decade since I devoted more than passing attention to Redmond’s annoyingly mercurial certification programs. I fondly remember the feelings of validation, accomplishment, and relief upon completing the final exam of the MS Certified Solution Developer (MCSD) track one spring afternoon in 1999. I rode that high for several months before the e-mail arrived from Microsoft giving me the “good news” that I hadn’t finished a race, I had only reached the front of the treadmill.

Dear {Insert Name Here},
We think all of you who busted your hump to get your certification are swimmingly awesome! So awesome, in fact, that we are doing you a huge favor. We are going to make your certification even more marketable to employers by upgrading the program. Well, not exactly YOUR certification, but the one you would have if we hadn’t just put an expiration date on all the tests on your transcript.
Isn’t that just marvy?

The realization that certification was a journey and not a destination was a bit of a buzz-kill for me, to say the least. I started to ruminate on my motivations for becoming certified in the first place and ultimately concluded that it was primarily an effort to credential myself for career advancement.

Is it worth it to continually renew these certifications every time MS incremented the annum on their software?

What is an expired certification worth anyway?


As the tests that comprised my MCSD were gradually retired, and I assume my certification status along with them, I began to wonder how to document my situation on my résumé. Sure, it was expired, but doesn’t it mean anything that I ran this gauntlet, even if I did it in last year’s chariot?

Sure you summited Everest, but what have you done lately?

It seemed only fair to continue to mention the cert on my résumé because, after all, I had earned it. I considered adding a qualifier (expired), but that felt like a stain that screamed “I don’t keep up with technology!”

But that wasn’t true. I was keeping up. I just couldn’t justify the cost and effort to continually re-take the exams. So I didn’t, until now.

So why now?

The whole reason I am even thinking about MS Certifications again is that my employer realized that they were perilously close to losing their MS Partner status and the associated software discounts.

They desperately needed to affiliate themselves with some certified SQL experts to meet the requirements of the program, so I volunteered to give up a few evenings making sure I knew how to properly punch myself in the junk using the new features in SQL 2008.

The extra effort to brush up apparently paid off, because I scored a respectable 925 (of 1000) despite the exam’s fairly extensive and complex content, which often ventured beyond the features of the platform relevant to developers.

Advice for those taking the 70-433 exam

Microsoft SQL Server 2008 Database Development Training Kit

The primary resource I used to prepare for this exam was the MCTS Self-Paced Training Kit (Exam 70-433) from Microsoft Press.

I found the book did a pretty poor job of explaining some of the material and went into way more detail on some topics than was necessary to prepare for the test. I consulted various blogs and MSDN articles when I found the book’s explanations too convoluted or terse to follow (i.e. frequently).

For example, the section introducing Common Table Expressions uses an unnecessarily complex query involving multiple joins, which made the example noisy and the CTE syntax harder to mentally parse.

The true value of the book was the framework it provided for studying for the exam rather than the narrative in the lessons, but the included practice exam and the 15% discount code for the exam registration fee were nice extras.

Some general tips to prepare for this exam:

  • Focus on the new stuff: New features/changes in 2005/2008 are disproportionately represented on the exam.
  • Hands-on practice is critical: Implement at least one example of each concept against a database you are familiar with, not just AdventureWorks.
  • Take the practice exam early: Even if you do horribly, it gives you a list of links to MSDN articles on the topics you need the most work on. I wish I had taken it earlier.
  • The practice exam is MUCH harder than the real one: I never broke 65% on the practice exam, but still got 92.5% on the real one.

OCD (Obsessive Completionist Disorder) Takes Root

A few days after passing the exam, I get another of those congratulatory e-mails from Microsoft with a link to the special MCP site where I can see all the perks associated with certification.

That’s all fine until I notice the “Certification Planner” link that informs me that I am one measly test away from the “Microsoft Certified IT Professional” level.

MCTS is for schmucks who can’t commit.

This “you are on step 3 of 4” type of marketing scheme is a particular weakness of mine. It is the sole reason I am completely addicted to Mafia Wars, despite the fact that it is a completely pointless game with no action or real strategy to it.

Mafia Wars

27% ??? That Just won't do!

This is exactly why I’ll probably take that other test, regardless of the fact that it adds very little, if any, additional wow factor to my résumé.

Somewhere in the Monk area of my brain, I just know it is a terrible thing to be 50% of the way to an MCITP certification for SQL, despite the fact that I was perfectly happy at 0%.

What’s the harm anyway? My employer will probably pick up the tab for the exam fees and preparation materials and I always learn something new while preparing for these. Right?

Then I stumbled upon this…

The Microsoft Certified Master (MCM) Program

… crap …

The two tests for the MCITP certification, along with 10 years of IT experience and 5 years of SQL experience, make you eligible to apply for the mother of all Microsoft certifications (MOAMC).

Interesting, tell me more…

I need to send a resume and get approved to even try for the certification?
That sounds super exclusive, cool!

A mandatory three week training program on-site in Redmond?
That might be a tough sell for my boss, but it would be cool to get a peek inside MS HQ.
Act now and get a $3,550 discount on the registration fee.
Wait a second, how much is that registration fee exactly?

Registration fee: $18,500.

Ready to Become A Master?

This MCM ad is a "save the queen" away from being an Evony ad.

Yeah. I don’t think my boss is gonna go for that, even with the helpful ROI calculation they link to in the FAQ. The boss does ROI calculations too, knows the smell of his own BS, and will probably recognize the scent of someone else’s.

So I’m gonna be on my own dime and use all my days off if I want to pursue this? Or perhaps I could spend the same money and get an MBA at a mid-level school, which has a much better chance of increasing my earning potential and won’t expire when SQL 2010 is in vogue. It seems kind of like getting a PhD in VCR repair.

The comparison to a college degree doesn’t stop there. Tell me this doesn’t sound eerily similar to a Master’s thesis:

The time it takes varies, depending on the candidate. However, the estimated time for fully qualified architects to prepare their documentation and prepare for the Review Board interview process is typically 80 to 120 hours, over a period of three to six months.

Who is getting this MCM Certification?

It does appear that this is a pretty exclusive club, based on Microsoft’s fluff-laden, marketing-speak description of the corpus of MCM and MCA recipients.

Worldwide, more than 300 Microsoft Certified Masters (MCMs) and Microsoft Certified Architects (MCAs) specialize in specific technologies, and more than 125 specialize in infrastructure and solutions. Those who hold these industry-leading certifications live and work in many countries and regions, including the United States, Europe, Latin America, and Asia Pacific. All have varied backgrounds, interests, and extensive experience.

In fact, the very same page appears to list every holder of this certification, which gives some interesting insight into the target candidates for the program.

Just from an eyeball estimate, it appears that MS employees comprise around 80% of the people who have this level of certification. I’m assuming they get a substantial discount beyond the $3,550 special.

For now, I’m gonna have to put this on the “Fat Chance” wish-list, and settle for the MCAD program or something more economical.

What do you think?

Is this certification the least bit enticing to you?
Have you even heard of the MCM or MCA programs, and if not, is it worth the money if you have to explain it to a potential employer?

Someone at HughesNet must be reading my blog

Apologies in advance for another off-topic post on the blog, but I just had to share this update on some of the complaints I leveled against HughesNet back in my May article, “The end of HughesNet as we know it (and I feel fine),” mostly regarding their FAP policy for controlling bandwidth usage and the poor implementation thereof.

If I didn’t know any better, I’d swear someone over there must have read what I posted and done something about it. Amazing. Granted, the invitation did appear to be a mass mailing, but given how closely the new program lined up with my complaints and suggestions, I still have to wonder.

So here’s the skinny…

Background (if you didn’t read the first article):

If you go over your daily allotment of 200-400MB/day, depending on how “premium” your individual plan is, HughesNet puts you in FAP (Fair Access Policy) mode, which throttles your bandwidth for 24 hours. My testing indicates that the throttled speed varies somewhat, but it is in the neighborhood of 14.4K modem speeds. That is pretty much useless given the average size of just about anything on the Net worth looking at. No appeals, sorry Charlie!

Complaint 1: After quite a bit of backlash, they finally provided a way to check if you were in penalty mode, but made it impossible to determine how close you were to hitting the threshold. By being secretive about the exact calculation that puts you in FAP, the only way to know what you have left is to go over. Then it is too late.



Looks like definite progress. The fine print elsewhere on the beta site indicates, however, that the initial version is only going to tell you whether you are in FAP, which you can already find out if you know what you are doing. Also, it looks like the planned version that tells you how close you are to the limit will only be available on HN9000 modems. But don’t take my word for it…

And, with an upcoming software update, HN9000 users will be able to see their remaining allowance in real-time, so you’ll always know exactly how much data you have left.

Complaint 2: The policy is supposed to be about stopping bandwidth hogs, but the limits are so low that even casual users are likely to trip them. Once a limit is tripped, you are going to wait out the 24 hours with no recourse. Tech support told me on more than one occasion that they didn’t have the capability to reset the meter for anyone. Even if the FAP was accidentally triggered by something like a virus or a Windows update running early, they were powerless to help you. Once, in a fit of desperation, I even offered to upgrade to a higher plan if they would remove the throttle early, but no can do. Even if you buy the super premium plan, more than 500MB a day ain’t gonna happen.



It looks like HughesNet users will soon be able to get their hands on indulgences in the form of tokens that get you out of penalty mode immediately. According to the beta announcement, each user will get one of these free every month as a get-out-of-jail-free card for accidental bandwidth “excesses.” For more chronic offenders, like myself, more will be available for purchase. These tokens can be redeemed to reset your cap for the day and get you out of FAP, if necessary. Additional tokens are going to be priced according to your service plan as follows:

Plan          Price per Token
Home          $5.00
Pro           $7.50
ProPlus       $10.50
Elite         $12.50
ElitePlus     $12.50
ElitePremium  $12.50

The next time I have an urgent work project and hit the FAP limit, this will definitely be preferable to scouting out a coffee shop with Wi-Fi. Again, in their own words…

By redeeming a Restore Token, your Daily Allowance is immediately reset, returning you to full speed. Your Download Allowance is not changed by purchasing a Token, and this fresh Download Allowance does not roll over into the next 24-hour period.

Other Items

There are a few other announcements in the e-mail that sound mostly like re-packaging of what they already offered, including a FAP-free window from 2am-7am when your usage doesn’t count against your daily limits. However, they are now providing their own branded download manager to help you schedule big downloads for that window, which ameliorates the situation somewhat.

The Bottom line

These are very positive steps and show that maybe Hughes has finally decided they care about keeping customers, or at least is worried about the kinds of things I predicted in my previous article on the topic. In any case, it sounds like it will make things a little more tolerable for those of us forced to use satellite Internet.

Ultimately however, they are going to have to find a way to raise the bandwidth caps by at least an order of magnitude if they hope to compete anywhere except the captive market they have now, i.e. rural customers with no other option.
