Saturday, August 20, 2016

Lucee's Rebellion

We've all been there: facing an elusive bug that just won't die. Over the last few weeks we have been waging war with such a bug at work. It has easily become the most involved and confusing bug I have ever worked on. And I wasn't even the first person on the issue; others at the company have spent countless more hours on it.

So here is a simple rundown of the issue's history. A few months back we sporadically saw our sites go down with seemingly no explanation. A site would be working fine, then you would get a 503 response back from the reverse proxy, and accessing the application directly would just leave the browser spinning. A restart of the server and all was well in the world, but still, a site going down randomly with no apparent explanation was not a good thing. When the issue was originally seen it only happened on our high-traffic QA sites and a few other devs' machines. As time went on the issue became more and more common, to the point where all sites were going down fairly consistently. So we had a problem.


Once we hit the point where all sites were going down consistently, we knew we had to do something drastic. We got a team together, locked them in a room, and hoped for the best. At that point we didn't know many facts, but we knew it took a good amount of time to see the issue. Logging levels were upped, JConsole was run constantly, and heap dumps were analyzed. There actually wasn't much difference between a healthy site and a broken one.


Intermission time for a brief explanation of the architecture of this product:



 

Of course this is an oversimplification of the architecture, but it is sufficient to describe the bug we were facing. Simply put: an Apache reverse proxy sits in front of our backend services, doing load balancing and other good stuff. One of the main services behind that load balancer is a Tomcat app server that holds a Lucee (ColdFusion Markup Language engine) servlet, where the majority of the code resides, plus a recently introduced servlet holding pure Java endpoints. This Jersey servlet was new to this release, as we look toward the future and read the writing on the wall that the ColdFusion community is dying. But enough about architecture.

Going into this issue we wondered if Jersey might be at fault, but we weren't able to confirm it until we ripped out all the Jersey endpoints (or simply didn't call them) and the site wouldn't go down. Once we started investigating further, we also found that the server would break almost exactly three hours after starting. This led to a horrible feedback loop of having to wait three hours after every change. I won't bore you with all of the things that were tried and investigated, but suffice it to say we narrowed it down to a threading issue of some sort. We were getting IllegalMonitorStateExceptions and ThreadDeath errors, and using JConsole to interrogate the task queue of a bad site's Tomcat always showed that the next thread in the queue was, for some reason, null. This led to writing our own versions of Java concurrent data structures, building custom versions of Tomcat, and filing bugs against Tomcat, yet we were never able to find an actual fix. Taking a step back to see what we had learned, we decided to take another look at Lucee.


We had not been looking too closely at Lucee because if you only exercised Lucee code you would not see the issue; it only appeared on the code path through Apache and Jersey. Our chief architect found a location in the code that would kill threads if they were still alive after a configured timeout, and we decided to investigate there. Sure enough, the default timeout was three hours! We already knew three hours was significant, so this felt like a win, and it was. We set the timeout to three minutes and, sure enough, three minutes after exercising a Jersey endpoint the site went down. This was the first time we had any control over the issue, so it felt great. There was still plenty of confusion about why this was happening. The Jersey call was definitely ending before that three-minute timeout, and why was Lucee having its hand in these requests anyway? We decided to log the stack trace of the thread right before it was killed, and it showed the thread in a parked state. This confused us even further: a parked thread definitely shouldn't be killed for a timeout, since it is sitting idle, ready and willing to do more work. It did, however, explain how we were getting into our bad state: this perfectly good thread was being killed by Lucee, more or less corrupting the task queue. The IllegalMonitorStateExceptions and ThreadDeath errors were also explained by this. We added some temporary code that would check the stack trace of the thread before killing it and, if it was parked, not kill it. Sure enough, this solved the problem. Fragile for sure, but it did the job and told us we were on the right track.
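The temporary guard was conceptually simple. Here is a sketch of the idea in plain Java; the class and method names are made up, and the real Lucee code looks different:

```java
// Hypothetical guard around a thread-killing timeout reaper. Names are
// illustrative; this is not the actual Lucee code.
class ParkedThreadGuard {

    // A parked worker is idle in the pool waiting for new work, so
    // killing it for a "timeout" corrupts the executor's task queue.
    static boolean safeToKill(Thread t) {
        Thread.State s = t.getState();
        if (s == Thread.State.WAITING || s == Thread.State.TIMED_WAITING) {
            for (StackTraceElement frame : t.getStackTrace()) {
                // LockSupport.park on the stack means the thread is parked
                // in a pool, not stuck mid-request; leave it alone.
                if (frame.getClassName().equals("java.util.concurrent.locks.LockSupport")) {
                    return false;
                }
            }
        }
        return true;
    }
}
```

Inspecting another thread's stack trace like this is expensive and racy, which is part of why the guard was only ever a diagnostic stopgap rather than the real fix.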


The next problem to solve was figuring out why these threads were being tracked by Lucee and why they weren't being cleared out of the tracked queue when the request was over. This part of the process actually didn't take too long. Our chief architect is one of the contributors to the Lucee codebase and therefore had a fairly solid understanding of the places to look. He pointed us in a few directions and we attacked. As a way of understanding what was happening, and to determine whether the requests were actually making it into the Lucee servlet, I decided to log the stack trace of each request that came in. The majority of the requests were legitimate, but we did see a few come in for the URL Lucee:80, and when one of these came in it was a "kiss of death" for the site. We weren't sure what this URL was for, and we were sure we definitely weren't calling it. From this stack trace we did get an explanation of why these threads were coming into Lucee even though they were requests to Java endpoints: first the request hit Tomcat, no surprise; then Jersey handled it; then it went into Spring dependency injection, which (drumroll) instantiated Lucee. A little context on why that stack trace is possible. In our move to Java we had some context that lived only in ColdFusion and wanted to get it over into Java. To enable this, and since Lucee just runs on the JVM, we decided to use Spring to pass it between the two pieces. When Lucee boots up it loads the context object into Spring's application context, and when Jersey needs said context it can retrieve it from Spring. It is a solution we are pretty proud of. Back to our previously scheduled rant. Once we saw this, it hit us like a ton of bricks how this was happening. It hadn't occurred to anyone that getting this context object would create a request to the server.
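To make the handoff concrete: the pattern is that the Lucee side publishes an object at boot and the Jersey side looks it up later. Stripped of Spring, and with entirely made-up names, the shape of the bridge is roughly:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of sharing one object between the CFML and Java
// worlds. The real code goes through Spring's application context; the
// class and method names here are invented for this example.
class SharedContextBridge {

    private static final Map<String, Object> registry = new ConcurrentHashMap<>();

    // Lucee side, at boot: publish the context object built in CFML.
    static void publish(String name, Object contextObject) {
        registry.put(name, contextObject);
    }

    // Jersey side, per request: retrieve what Lucee published. In our real
    // system, the Spring-backed version of this lookup is what surprisingly
    // instantiated Lucee and produced the phantom Lucee:80 request.
    static Object lookup(String name) {
        return registry.get(name);
    }
}
```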
Each request comes into CFMLFactoryImpl and gets added to a concurrent map that is iterated over occasionally to see if the timeout has elapsed. So our request was being added to the map when it came in to the Lucee:80 URL, but because it's not a real request it never "ends," and therefore, as far as the system is concerned, it sticks around forever. We noticed that these fake requests were conveniently of type HttpServletRequestDummy. With this knowledge in mind, we extended the system to not track HttpServletRequestDummy requests, and with this change of ~10 lines we had solved our elusive bug. We committed the code, filed a pull request with Lucee proper, and were done. As these things usually go, the majority of the time was spent finding the root of the issue and a much smaller amount actually fixing it.
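The fix itself amounts to a guard clause when a request is enrolled for timeout tracking. This sketch uses illustrative names and a stand-in for Lucee's HttpServletRequestDummy; it shows the shape of the change, not the actual Lucee internals:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the ~10-line fix. Lucee's CFMLFactoryImpl keeps a concurrent
// map of in-flight requests and reaps any whose timeout has elapsed; the
// change is simply to never enroll dummy requests in that map.
class RequestTracker {

    // Stand-in for Lucee's internal dummy request type.
    static class HttpServletRequestDummy {}

    private final Map<Object, Long> inFlight = new ConcurrentHashMap<>();

    void onRequestStart(Object request) {
        // Dummy requests never "end", so tracking one guarantees the
        // reaper eventually kills whatever thread it is pinned to.
        if (request instanceof HttpServletRequestDummy) {
            return;
        }
        inFlight.put(request, System.currentTimeMillis());
    }

    void onRequestEnd(Object request) {
        inFlight.remove(request);
    }

    boolean isTracked(Object request) {
        return inFlight.containsKey(request);
    }
}
```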


So what were my lessons learned from this bug? Honestly, some basic stuff. Logs are crucial; we actually had a log of these timeouts, but it was getting lost in the mix in a log file that is never read. Even if you think a certain piece of code couldn't be the issue, you could well be wrong. And integrating various technologies can cause some crazy bugs.


As you can see, there was nothing special that we learned. Just applying simple principles helps solve the gnarliest of bugs.

Saturday, February 27, 2016

Coding for Others

I have recently had the chance to reflect on some of the things I find difficult and challenging in my job vs. my coding side projects. One of the things that has jumped out at me in this reflection is how you have to change what you are thinking about at work. One of the major differences is that you are always writing code for others, not just for the problem itself. On side projects, other than simply sharpening your craft and doing the best you can, you don't need to follow any standards or best practices to get the job done. As long as the program does what it needs to do, no one will be the wiser. In these situations I would call this coding for the code. Whereas when you are on a team working on code that will likely live on for a decade or more, you are still coding for the code (or for the compiler), but you also must code with the next developer in mind, the expert code sorcerer and the newbie both.

One place where I believe these two coding styles can be seen clearly is functional programming. I don't have a lot of experience with functional programming, some Scala and functional principles applied in other languages (JavaScript, Java, etc.), but even as an only somewhat experienced functional programmer I can see this when I look at a block of functional code. Sometimes you look at a one-liner that perfectly massages the data in an obvious way and gets the job done. These are the places where you get that rush of excitement from just reading code. Then there are other times when you hit a one-liner that might as well be written in Greek (that is, unless you read Greek). When you finally figure out what it is doing, it still isn't obvious, and you realize the developer before you probably wrote it that way just to get it onto one line, not writing for the next developer.
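To make that concrete, here is a contrived example (in Java, with made-up data) of the same transformation written twice, once compressed and once for the next developer:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// The same logic twice: clean a list of words, then count how many are
// "long" (more than 3 characters) vs short.
class Readability {

    // The "Greek" version: correct, but you have to decode it.
    static Map<Boolean, Long> dense(List<String> xs) {
        return xs.stream().map(String::trim).filter(s -> !s.isEmpty())
                .collect(Collectors.partitioningBy(s -> s.length() > 3, Collectors.counting()));
    }

    // Written for the reader: each step is named and separated.
    static Map<Boolean, Long> readable(List<String> words) {
        List<String> cleaned = words.stream()
                .map(String::trim)
                .filter(word -> !word.isEmpty())
                .collect(Collectors.toList());
        return cleaned.stream()
                .collect(Collectors.partitioningBy(
                        word -> word.length() > 3, // long words vs short
                        Collectors.counting()));
    }
}
```

Both produce identical results; the second just costs the author a few more lines and saves every later reader a few minutes.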

Taking this "coding for others" concept further, sometimes others are the primary reason we are writing the code at all. I'm thinking of libraries and APIs. Sure, these have behavior, but they don't do anything until someone uses them. And if you don't make your API easy or enjoyable to work with, no one will use it. I have heard a few talks on API design, and it is hard to have a surefire process because I believe it is very problem- and domain-specific. However, I think it all starts with having those "others" in your mind as you code.

I will end with one short example. Last week I had the opportunity to participate in a hackathon at work. The feature my group worked on was a two-factor authentication mechanism to strap onto our current login system. After only a little research we stumbled upon a service called Authy, and given that we only had a limited number of hours, the team decided this was the way to go. And, as famous last words often go, I told the team, "This is going to be easy." Looking back, it was, but in the hours that followed, that is not what I was thinking. Now, granted, I think Authy had a thing or two stacked against them, the main one being that they had recently been bought by Twilio, and it appears they had migrated some of their APIs, or at least that some of the APIs had changed. This created a confusing landscape in which to test out the service. Thinking their client helper libraries would make integration dead simple, we started there, but about 400 obscure errors later we gave up on that for a while. Finally one of my teammates found their GitHub page, which gave us some insight into how to properly use the client library. We still couldn't use it entirely, but at least we got farther without errors. Then we found some limited documentation on integrating with plain REST calls, tried that, and learned a bit more about how to drive their API, which finally let us use it in production mode; but we still could never get a proper response in dev mode. Only after finding a random person's blog post that detailed how to use their dev API did we finally get it all working. Granted, next to the dev API key it does say, "Contact us to learn how to use this," but I shouldn't have to. It should be simple enough that I can figure it out, and your errors should be descriptive enough that I know what is wrong. That final piece of information really should have been in their documentation, not on a random person's blog. But alas, we got it done and all was well. I don't mean to rip on Authy; I believe we have all made products like this, and Authy has definitely made more money off their products than I have, so we can see who the real winner is.

All this being said, I guess the punch line is how I started: when writing code, remember who you are writing for, and the world will be a better place.

Wednesday, December 30, 2015

Feedback Loops

It has been a long while since I posted to this blog. I will try to post more often, mainly for my own benefit. If anyone else in the world finds it interesting, that would be great as well. Other than that, it's just another tech blog.

Recently I have had the opportunity to reflect on the effect that the feedback loop between writing a line of code and seeing the result has on my development. The primary language used at my work is ColdFusion. This language lets a developer experience the dichotomy of fast feedback and slow feedback. When making a change to most of the code there is instantaneous feedback: as fast as you can refresh the page or run the test is as fast as you can see the result. But we also have custom tags (basically a poor man's web component) that require a restart of the server to see the result, and this restart can take 4-5 minutes at times. So what effect does this have on a developer? Well, as you can imagine, a tighter feedback loop leads to more frequent testing. I find myself making one-line changes, testing, editing the previous line or adding a new one, testing, and repeating this cycle until I'm done. This cycle can be under 10 seconds at times. I find myself writing "riskier" code, using println-ing as my debugger, and having a generally lighter feeling about my development. Contrast this to the long feedback loop, where the risk is larger if a mistake has been made. In a dynamic, loosely typed language there is nothing worse than going through the long restart process just to find there was a syntax error. Code is more thought out, mental debugging is used more extensively, and there is a much "heavier" feeling about the development. The funny thing is, there have been numerous times where, from start to end of fixing a bug, the fast feedback loop has taken just as long as the slow one. This is because of the more intense focus and higher standard the slow-feedback code is held to. If I had treated the fast-feedback code with the same care as the slow-feedback code, I could have finished the issue much earlier.

So where does this put us? What is the takeaway? I'll be honest, I'm not sure what to take from this other than a continued desire to reflect on how I develop under certain conditions. Fast feedback vs. slow feedback. Business application vs. pet project. Statically typed language vs. dynamically typed language. I believe there is much to learn in understanding how we change our development cycle under all of these conditions. The more we understand these differences, the more we can leverage them when deciding what tools to use, and apply the lessons learned from one form while using another.

Saturday, January 8, 2011

The Year 11111011010

Because of school, work, and other items taking up my time, I have not had a chance to post any new content in the last few months. I will try to change that going forward (original, I know, to apologize for not posting on a blog and promise to do better). However, I have a few brief thoughts about this previous year and its trending topics, so I will share my comments in this blog post.




Thoughts From Computer Science Courses:






I am a computer science student at Utah State University, and this last semester I started some upper-level courses in my studies. Along with this has come more work with the organization and architecture of computer systems. The more I learn about these systems, the more it occurs to me that the device I am typing on at this moment should not work; there are too many things stacked against it. The amount of thought that has to go into even representing a floating-point number, and the problems that can be encountered when doing so, are never considered by an average computer user. Yet parts of this low-level hardware and software are far more elegant than what we use today. Where are the Donald Knuths or Alan Turings of our day? Instead of needing deep knowledge and intuition for how to make these things work or come up with an elegant solution, we now have 12-year-olds in their garages with Python kicking out programs. As is stated in the movie "Antitrust," "any teenager in a garage could put [you] out of business." When you lower the barrier to entry, you let a lot more people in (profound, I know). This is good, because it allows someone with a good idea to capitalize on it and make millions. Look at Zynga with FarmVille and CityVille, or Rovio with Angry Birds. Now I'm not saying the creators of these products are not smart or don't have a level of deep understanding. They very well may, but do they need to nowadays? And who am I to say? I'm not the one rolling in millions like them. Bringing this back to the subject, let me say this: are the computer scientists graduating from USU or any university (this goes for Stanford, MIT, and the like too) really what is needed to be successful? Which market is bigger: demand for computer programmers or computer scientists? How many jobs will write in assembly? Develop sorting algorithms? Or develop quantum computing?
It's these questions that set the computer scientist apart from the computer programmer. Yes, this knowledge of low-level dealings and algorithm analysis will definitely help when a computer scientist acts as a computer programmer, but is it really needed? So what if the 12-year-old in his garage doesn't know how a merge sort works, how pipelining operates, or why his floating-point numbers lose data? Would the four years spent pursuing a degree really be better spent in industry, gaining real-world experience with how these things actually operate? That is up to you, the reader. Personally, I find this shift in what is needed to be a great computer programmer fascinating. I vote to stay in school, but I can see benefits from the alternative.










CES 2011






OK, so technically this wasn't part of the year 2010, but it just happened, so I figured I would write about it while I still had it in mind. This year I greatly enjoyed CES for the first time, not because I actually went (although it happened only a few hours from where I live), but because of CNET's awesome live coverage of the event, so I would just like to start by giving them a virtual high five. To talk briefly about some of the bigger things brought up and products announced: first, 3D as a whole. I expected 3D to make a big showing at the show this year, and it definitely did. Sony's whole press conference, I swear, was in 3D (makes for some fun 2D watching, let me tell you). The thing I was not ready for was consumer-level 3D recording products. This hadn't come to mind when thinking about 3D improvements, but I find it very compelling. I'm a little worried that quality will go down, but that being said, I like the way it's moving and the technology it brings. The most important technology on these products, I think, is glasses-free 3D. This is where we need to get to; consumers aren't going to want to buy $150 glasses for each person who wants to watch their TV. A good step I saw from some manufacturers was passive glasses for 3D. Cheap glasses could make this very good and, I think, should tide us over while glasses-free 3D technology further matures. Another interesting feature I saw a video on, with a 3D TV announced at CES (Vizio's, I think?), is a way to play a two-player video game on one screen. You play normally with your glasses on, but instead of using the glasses to show you 3D content, each pair shows only half the picture (a 2D image) of just your player, not a split screen of your player and your friend's. Say goodbye to the days of screen peeking.
Also say goodbye to the days of having to play off a tiny screen, because the image appears full-screen for you. Moving on to more products: another huge class of device announced at CES was tablets. I heard a report of 80 different tablets. That may be an exaggeration, but it gets the point across. By quite a margin my favorite (as well as many others', winning "Best in Show" from CNET) was the Motorola Xoom.




It is quite a beautiful device with a beautiful interface. Kyle likes him some Honeycomb. That being said, there are some problems I see with it. 1) The MicroSD slot will not work at launch and will be activated by a software update. Really, Motorola? Really? You don't have software to run the hardware? This reminds me of updating my iPod Touch and finding out it had Bluetooth, except worse! To be perfectly honest, I'm not sure how many applications will work without the MicroSD. 2) It releases with 3G, and then you bring it into the store for a hardware and software update to 4G. The 3G part isn't actually a problem; 3G is perfectly fine to release with if it gets the device out the door faster. But I had one of those "What the heck?" moments when I heard it would need a hardware update too. I don't know how that is even going to work; if you have to leave your device over a weekend or something, that is not OK. If they just hand you a new device, that might be fine. That's more of a problem for them, though. Another "tablet" I enjoyed was the Samsung 7 Sliding Series.




This one is actually more of a convertible than a tablet, but it is the first convertible that actually made me think, "I like this." That's how I think they should work. The sliding mechanism is a little iffy, but I like how it looks both open and closed, and it really looks functional to me. The problem I see here, though, is the OS. Not because I don't like Windows 7, but because vanilla Windows 7 is not tuned for a touch interface. At the same time, I am not a big fan of skinning an OS. Oh well, guess you can't have it all. There were actually lots of other very nice tablets announced at CES, and even the BlackBerry PlayBook is sexy and functional.




Never thought you would hear BlackBerry and sexy in the same sentence, did you? Let's take a time-out to talk about some devices that got a lot of buzz but that I thought were pretty boring. Up first, the Casio Tryx.




Everyone loves the way it spins everywhere, but all that does in my mind is limit its potential for hardware. Big dud in my book. Next dud: the Razer Switchblade.



This just made me think of the Optimus Maximus built into a laptop. Big whoop. If you want to go for this style, I think a better way is the route of the Acer Iconia, also shown off at CES. The final announcement I thought was fairly cool was Intel's new "Sandy Bridge" chips.




Although it will be sad not to be able to laugh at on-chip graphics anymore, these chips are hot and I want one! I love my 3-year-old laptop with its Core 2 Duo processor, and it actually still does the job for me easily, but that doesn't mean I don't want more. The part that made me a little sad was the "Intel Insider" technology. I like that it will bring more content in higher quality to people faster, but I don't like what it means for consumers. How am I supposed to explain to my mom that the reason she can't watch a streaming video in HD is that she doesn't have the right processor? Sure, it's one thing to say the computer isn't fast enough, but "the right technology" is a bunch of hogwash. Hardware DRM seems worse than software DRM, even though Intel claims this isn't DRM. Sounds like an accident waiting to happen, in my opinion. So that's my comment on CES.




Google TV






Here I go again defending another beaten-up technology, and that didn't seem to help Wave in its adoption (may it rest in peace in the great bit bucket in the sky). I have heard many a person say that Google TV is dead. To which I say, "Psh." OK, so technically not a word, but it gets the point across. Yes, Google did ask suppliers to hold off on announcing products at CES because they are adding improvements, but this didn't stop some manufacturers from announcing anyway. Also, it is a great product now, and they are still selling it. Google so graciously sent me a free one to test out, and although it does have a bit of a learning curve, I really do like it. It's a beta product, no doubt, but so was Gmail. Any connected-TV device is going to have a bit of a learning curve. This is a major paradigm shift from the decades of people sitting in front of their screens and just sucking it in. That is no longer going to cut it. We interact with our media nowadays, and in so doing we need a product to enable this for us. I think that product is Google TV. Look at the apps provided by TV manufacturers. I didn't even know Samsung Apps existed before CES, and are they catching on? I've heard no compelling news saying yes. Google is a high-profile target, and people want to see these targets fall. Google TV is ahead of its time. I will admit that, but that doesn't mean it won't be successful. I challenge you to watch for what comes in the future for Google TV. I, for one, am going to enjoy it.


Well, that's all for now. Enjoy the beginning of your year, and I hope you have more content to read here soon.

Sunday, July 18, 2010

Google to make Developers of Us All

Yes another Google post.

I recently returned from the Google I/O conference in San Francisco, and one of the recurring thoughts I had was, "With the help of Google, soon everyone will be able to develop." All the products they were coming out with, from easier APIs to code where all you had to do was fill out a spreadsheet and copy and paste, pointed that way. I will admit, at first this bugged me. I have always taken a thrill in possessing the skill to create something wonderful out of "random" lines of code, and here was Google coming along and allowing any average schmo to do the same thing without knowing anything. How could Google do this to all their developers? This is how they show their thanks, by making developers useless? I then took a look around me at the men and women I was rubbing shoulders with: people from all across the globe, people who have written some great software and made huge changes and contributions to the industry. I spent a night listening to a developer from Skype run a talk where a developer from the Facebook team was in the audience. I sat at the feet of the whole Android team (literally). The developers of Wave and every other Google product were all right there. A man asking a question during one of the talks, I later found out, was part of the team that adapted Android for the Nook. These weren't just your mediocre programmers. I was humbled. And when humbled, I believe, we learn best, as I did there in that room. These developers weren't offended by Google "taking their jobs away." They were singing Google's praises. But why? It then occurred to me what Google really was replacing. They weren't taking away all the programmers' work, just the low-level grunt work, connecting the pipes if you will. Whereas it was some of this low-level work that I could do and that brought me joy, that was not the case for these men and women actually in the industry.
They actually had customers who counted on them to improve their products and keep pushing the envelope, and it was this low-level work that was slowing them down. Google was allowing them to spend more of their time writing the code that matters to the consumer, the parts that are industry-changing. Even after I/O, Google continues to make development easier with the release of App Inventor for Android. This application can be seen either as replacing programmer skill or as lowering the barrier to entry so that normal people can transition to full-out Android development. Although I was skeptical in the beginning, I am now a believer, and I applaud Google in their efforts to assist developers in their job. I will continue to wait anxiously for what Google does in the future.

Wednesday, June 2, 2010

Why I Trust Google Over Facebook

The other day, while reading about some of the newest Facebook privacy changes, the thought occurred to me that this was all too much. I have always been for openness and actually wasn't against allowing all the information they were making public to be public; it was the way they went about privacy that got me worried. They had opted me in to all of these new "features" (read: money-making endeavors) without ever asking for my consent. Along with this thought, I was thinking about how Facebook had too much information about me and I really shouldn't be trusting it with all of it. About then I received a text on my Google Voice number, went and checked on my private Google Docs, went to Google Wave, updated a contact in Google Contacts, emailed a friend on Gmail, played a game on my Android phone, uploaded some family videos to YouTube, and, why not throw it in, wrote this blog on Blogger. Google has my life in its hands, so why do I trust them?

In the next few paragraphs I would just like to quickly go over why I trust Google more than Facebook:

This may seem like a weak reason, but I trust Google more because of their philosophy of "Don't be evil." Sure, any company could have this as their philosophy, but Google has stood by it in all that I have seen of them. They are a huge company, yet with some of their major products they freely release the code (Android, Wave, Chrome, Chrome OS); can you ever see Microsoft or a company like that doing the same? Facebook, on the other hand, seems to take the opposite view of the consumer. I believe that when Mark Zuckerberg sees a user, all he sees is a dollar sign. This is also visible in watching them say they are going to do one thing while doing another. They put on the face of a privacy advocate while releasing all this information to outside sources. One sad but funny example was the recent bug that let you view other people's profiles and see their chats and friend requests. This bug was just that, a bug, but nevertheless it shows how a device meant to help privacy, being able to view your profile as someone else, can actually be used to destroy it.

Google is transparent. Google will come out and say that they make money off having people's information; Facebook just takes your information and makes money off it. This transparency is also apparent in how you release information about yourself in these two services. Facebook seems to be all for an "opt-out" approach, which greatly hurts the uninformed, while Google sides with an "opt-in" approach. The blaring exception to this was Google Buzz. Buzz came out and opted you in to a lot of followers as a service, and I truly believe Google did not see a problem with this because in all their internal user testing it had worked out great. As we saw, that did not go over well, and you may say this shows Google isn't so great. All it shows is that they can make mistakes, which everyone can; it was their response to the public outcry that is the impressive part. They greatly improved management of your followers, showed you all that was being shared and had you OK it, and made it much easier to completely disable Buzz.

The controls that these two companies give their users are also an example of the difference in their mindsets.  Facebook, although it technically does give you a lot of controls, does just that: gives you a lot of controls instead of a lot of control.  No matter how many checkboxes you offer, if the consumer does not know what is happening, then your controls have failed.  Google, with their Dashboard, has done a much better job with a much harder problem.  Google has all these different services, yet somehow they make them all easier to manage collectively than Facebook is alone.

Those are some of my main points on the difference between Facebook and Google.  Will I, after all this, stop using Facebook?  Sadly, no; that's where my friends are, so that's where I must be.  Nevertheless, I will always feel more comfortable when I am in the Google world.

Do you agree? Disagree? Have counterexamples? Make yourself heard in the comments.

Tuesday, March 16, 2010

Technology (Or Lack Thereof) That My Kids Won't Understand

I decided to take a step outside of my regular review-style posts for a much more laid-back, fun topic.  One night at work it occurred to me that in only 17 years the technology industry has completely changed, so much so that my kids won't understand any of it even if they were born today.  I have compiled a list of some of the things that I think they will never quite understand; if you have more ideas, post them in the comments below:

1. The Sound of a Dial-Up Modem



I can still remember the first day I got onto the internet. It was a warm summer's day and I decided, for some unknown reason, to boot up the old modem and get online; the only problem was that my dad was on the phone and I kicked him off the call. Whoops. The scream of a modem was, for a long time, the sound of the internet for me.  56Kbps felt perfectly fast, and I didn't understand why you would want anything more.  Oh, how far we have come. (Which reminds me: Grandma, we really need to get you off that dial-up.)

2. The VHS Tape



These wonderful tapes still take up a whole bookshelf in my house (who needs books when you have movies?). The ease of use was awesome, and the fact that I had to put a tape in a "rewinder" (I can't wait to explain a rewinder to my kids) to get it back to the beginning seemed like a perfectly reasonable thing to have to do.  To tell the truth, the DVD will be long gone by the time I have kids, but I still remember the first DVD I watched.  The picture quality didn't seem anything special, but when I saw the menu I was in utter awe.  How was that possible? I could look at these chapter things and pick one? Special features!? Shrek will always hold a place in my heart as the first DVD I saw.

3. Windows ME



Although many of the other items on this list will always have a little nostalgic value for me, Windows ME is one thing I will be happy my children will not have to experience.  At the time I actually used ME I was oblivious to the horribleness I was being subjected to, but now, in my enlightened state, I have managed to block out most memories of my run-ins with that OS.


4. Internet Only When You are at Home or Work



My kids will never appreciate that the internet wasn't always right at your fingertips and that you actually had to go find a computer to get it.  Nowadays, from the moment I walk out my door I stay connected through the internet on my phone and the Wi-Fi hotspots dotting the town.  Continuing advances will only improve the speed and reach of this wide data network.

5. Cell Phones that Only Make Calls



Yes, I realize it may be hard to believe, but there is actually a phone in that electronic device you carry around all the time to surf the internet and manage your data.  There are times when I have so much running on my own phone that it is no longer effective as a phone.  I use a grand total of about 10 minutes of calling every month, and I don't see that growing any time soon.

6. Cassette Tapes



This item follows fairly closely from the VHS tape item.  For quite some time after the CD came out I still used tapes extensively because of how easily you could create mixes of your favorite songs.  My kids will never understand the sadness of pulling out your tape to find it has gotten stuck and ripped, or the requirement to flip a tape over to continue listening.

7. Having to Have a Lot of Money to Make Videos that Large Audiences will View



I worry that my children will never quite understand what it was like to only see videos from high-profile companies such as film studios and news agencies.  We can thank YouTube and other video-sharing websites for the low barrier to entry into the video market.

8. When You Missed a Show on TV You Had to Wait for the Reruns



Not that I consistently watched many shows when I was a child, but if I forgot to watch a show I was just out of luck.  Now, thanks to the beauty of the DVR, I never have to miss another show.  There is something extremely nice about being able to turn on the TV and see a list of new shows to watch.  It also makes watching TV much more efficient, because I can fast-forward through commercials and only watch the shows I want instead of just channel surfing.

9. Wires Needed to Connect Things



This one isn't quite complete yet, but eventually it will be.  Sometimes I even forget that I can plug in an Ethernet cord to get online.  Wireless is the wave of the future, and hopefully my kids won't even have to worry about plugging in their electronics for power, thanks to wireless electricity.

10. Blowing on Game Cartridges to Make them Load



There will always be a place in my heart for the lowly Sega game cartridge, where it was very common to have to blow into both the console and the game to actually get it to load.  Sadly, my children will probably never understand this necessary CPR for game consoles.  I might just have to keep the Sega around long enough to show them.