Most government agencies and individuals have set up some form of wireless connectivity in their offices or homes. And if you’re like most people, you’ve probably noticed periods when the bandwidth on that shared connection slows to a crawl, even if the wired part of the network is performing adequately.
The reason could be that applications are hogging too much of the limited available wireless signal, far more than they actually need. Qualcomm is trying to combat this problem using the cloud, some local intelligence at the router level and the same kind of shared-information approach that other companies have used in other fields, such as anti-virus software.
To increase connection speeds, the company has unveiled a new service called StreamBoost technology for Wi-Fi, which will soon be making its way into routers and gear from D-Link and Alienware.
StreamBoost users would opt in to a service in which their routers take snapshots of the various applications that are running, then upload that data to the cloud for analysis. Qualcomm promises that the data would be anonymous, though, oddly enough, the service will still require users to sign up with a username and password.
Thoughts of Big Brother aside, the application will measure whether, say, a movie application requested a 20 megabits/sec pipe but only used 4 megabits/sec the entire time. Currently, most routers simply give each application whatever it says it needs, so lots of bandwidth ends up allotted to what is essentially empty space in the airwaves.
Once enough user data has been collected in the cloud, a picture of actual usage patterns can be assembled, and routers will be automatically configured to give each application only what it needs, not simply what it requests. Users will also have some control over which applications they feel should be prioritized on their individual routers, or they can simply leave it up to StreamBoost to make sure that every drop of bandwidth is being efficiently assigned.
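Qualcomm hasn't published StreamBoost's actual algorithm, but the core idea -- grant applications what they have been observed to use rather than what they request -- can be sketched in a few lines of Python. The function name, the 25 percent headroom margin and the numbers are hypothetical illustrations, not Qualcomm's implementation:

```python
def allocate_bandwidth(requests, observed_peaks, capacity):
    """Toy allocator: cap each app at its observed peak usage
    (plus a little headroom) instead of its requested rate."""
    grants = {}
    for app, requested in requests.items():
        # If we've never measured this app, fall back to its request.
        peak = observed_peaks.get(app, requested)
        grants[app] = min(requested, peak * 1.25)
    total = sum(grants.values())
    if total > capacity:
        # Oversubscribed: scale every grant down proportionally.
        scale = capacity / total
        grants = {app: g * scale for app, g in grants.items()}
    return grants

# The article's example: a movie app asks for 20 Mbit/s
# but has only ever been seen using 4 Mbit/s.
grants = allocate_bandwidth({"movie": 20, "voip": 1}, {"movie": 4}, 50)
print(grants)  # movie gets 5.0 Mbit/s, not the 20 it asked for
```

The 16 Mbit/s the movie app would otherwise have claimed stays available for applications that actually use it.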
"Our goal at D-Link is to ensure each consumer has the best possible online experience," said Dan Kelley, associate vice president of marketing for D-Link Systems. "StreamBoost gives us a way to make sure every person using the network will have an optimal experience, regardless of application usage."
The technology is being demonstrated this week at the CES computer show in Las Vegas.
Now, part of me wonders if technology like this is really necessary. We should start seeing gigabit/sec wireless routers become mainstream once the 802.11ac standard becomes commonplace. But I suppose even if you have a huge highway, there’s no reason to weave across several lanes when you don’t need to do so. Also, usage tends to rise with the technology, so we will probably find a way to fill those huge gigabit pipes once we have them. After all, everyone said we would never fill a 10M hard drive when those first came out.
As for the technology itself, this is another example of the cloud making a crowdsourcing-style application possible. Anti-virus companies have done this for quite some time: users automatically report and upload the specifics of new viruses they encounter so the samples can be analyzed, and in turn receive patches and updates to help combat those threats quickly.
Some folks might worry about what Qualcomm would do with the data, and the fact that you have to sign up with a username and password is a little troubling. Why make people identify themselves if you are then going to scrub that data to make it anonymous? Seems like an unnecessary step, and one that is sure to reduce the pool of people willing to use the service.
But if it works and does what it claims, it could eliminate wireless bloat, something that will only become an increasing problem as more agencies add wireless routers and more users cut their cords.
Posted on Jan 07, 2013 at 2:02 PM | 0 comments
It’s always great to see efforts to bring young people into the world of computers and programming. It keeps the public sector and industry alike from worrying about not having enough new people to carry the torch.
Along those lines, one of the most innovative projects in recent years has been the Raspberry Pi, a single-board computer that can be purchased for just $25. There is also a kit that is a little more expensive but allows the entire computer to be constructed from scratch, sort of like a modern-day Erector Set.
If only they had something like that when most of us were growing up. How cool would it have been to find a Raspberry Pi underneath the tree on Christmas morning?
The Pi is impressive because of its ability to bring young people into the world of computing -- the Raspberry Pi Foundation promotes it for teaching basic computer science in schools -- as well as the potential for parental bonding, but it’s also a surprisingly robust computer.
The Pi runs on a 700 MHz ARM processor that can be overclocked up to 1 GHz -- and overclocking is one experiment you can perform that doesn’t void the warranty. It also has 256M of RAM (newer boards ship with 512M). To save money, the Pi doesn’t have a hard drive; it uses an SD card for booting and storage, though you can easily attach a portable drive and use that as your storage medium instead.
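On the original Pi, that warranty-safe overclock is done by editing the boot configuration file. The values below match the 2012-era "turbo" preset offered by the official raspi-config tool; check the Foundation's current config.txt documentation before applying them, since supported settings change between firmware releases:

```shell
# /boot/config.txt -- warranty-safe "turbo" overclock preset
# for an original Raspberry Pi (2012-era raspi-config values).

# ARM core: 700 MHz stock, raised to 1 GHz
arm_freq=1000

# GPU core and SDRAM frequencies raised to match
core_freq=500
sdram_freq=600

# Small voltage bump; the warranty stays intact
# as long as force_turbo is NOT set
over_voltage=6
```

With these settings the firmware only ramps up to the higher clocks under load and throttles back if the chip gets too hot.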
It can run on a variety of operating systems, including its own Raspbian, plus Debian GNU/Linux, Fedora, Arch Linux ARM, RISC OS and FreeBSD. Basically, you just attach a mouse, keyboard and monitor and you have a pretty neat little computer that you can fool around with, experiment on, or actually use productively.
The unit is selling quite well by all reports, but perhaps the biggest measure of success is that just before the holiday, the foundation announced the grand opening of the Raspberry Pi App store. The store can be accessed easily through the Raspbian OS or through a standard browser on a non-Pi computer. Most of the apps are free, though some have to be purchased. They include productivity suites, utility programs and even games.
This will add yet another element to the educational value of the Pi, encouraging users to program their own apps, which can then be distributed or sold in the store. In addition to professionally programmed titles like the “Storm in a Teacup” game from Cobra Mobile, I’ve already seen news reports about modern-day lemonade stand-type businesses being set up by kids who plan to create and sell their wares within the Pi community.
I’ll tell you the truth: I was a little skeptical when plans to create the Raspberry Pi were announced. But I’m really pleased that it’s doing so well. I wonder how many children will get a Raspberry Pi as a gift over the holidays, and how many of those kids will go on to become the great hardware makers and software programmers of tomorrow? Government has worried for years about a coming IT talent drain. Maybe something like the Pi can help. Good luck, kids! I’ll look for your apps in the store.
Posted on Dec 21, 2012 at 12:06 PM | 1 comment
The news hit the world like a thunderclap, or maybe more like the small pop you hear when you break open a cell of plastic bubble wrap. The World Wide Web Consortium, the group that has been in charge of creating the new standard for about 10 years, announced Dec. 17 that work on HTML 5 was finished.
It’s kind of funny, because I know programmers who have been working on HTML 5 applications for years now. Nobody seemed to want to wait for the consortium to actually finish its work. In fact, almost every browser running today -- including Microsoft Internet Explorer, Mozilla Firefox, Apple Safari and probably the earliest true adopter, Google Chrome -- is already able to run most HTML 5 applications.
The announcement simply means that the consortium isn’t going to be adding any new features into the mix. What you have right now with HTML 5 is supposedly what you will get in the final run. It still needs to be tested and scaled, but the product itself is done. If all goes well, it will become the standard for all Web applications in 2014.
"The broader the reach of Web technology, the more our stakeholders demand a stable standard," W3C CEO Jeff Jaffe said in a statement on the consortium Website. "As of today, businesses know what they can rely on for HTML 5 in the coming years, and what their customers will demand. Likewise, developers will know what skills to cultivate to reach smart phones, cars, televisions, ebooks, digital signs and devices not yet known."
I’m sort of pooh-poohing the W3C for how long it’s taken to get the standard off the ground, but in truth, HTML 5 will do a lot of things on its own that today are handled by many different programs. Rich content such as slide shows and videos will be able to run in the browser rather than requiring a plug-in to make it work.
For public-sector agencies, this will mean they can concentrate on securing a single platform for their websites, rather than getting caught by a security hole in Java or Flash (both frequent targets of malicious exploits) or in any of the other enabling technologies being used to provide dynamic content today.
For users, it will mean a much cleaner computing environment on their desktops, tablets and smartphones. Almost everything will be handled by their browser of choice. It’s a win-win all around. It just can’t come soon enough.
Posted on Dec 18, 2012 at 2:04 PM | 0 comments
This year has seen a lot of advances in technology in general, and in government IT in particular, despite strong economic headwinds. Much of what was done in 2012 will act as a baseline or jumping off point for technology and trends in the New Year. Here are some predictions for what 2013 will look like in technology:
1. Mr. Roboto
I was as surprised as anyone to find so many robots entering government service in 2012. But more impressively, the robots actually began to show signs of intelligence and autonomy, a trend that will likely break wide open in the coming year.
Some of the first signs of intelligence were displayed by the 510 PackBot, a workhorse military robot designed to defuse bombs and perform other thankless tasks without putting a human at risk. The latest upgrade for the 510s permits them to find their way back to their operators on their own if the radio guidance signal is disrupted. This may be a small step, but for a robot that’s never thought for itself, it’s a pretty big deal.
In the civilian arena, the National Oceanic and Atmospheric Administration used Wave Glider swimming robots from the Liquid Robotics company to explore the Arctic Ocean for months at a time without human intervention, gathering invaluable data on climate change.
But the king of the robots proved to be the X-47B stealth drone, a robotic airplane that is being trained to launch from an aircraft carrier, fly a mission and land again on the carrier without human guidance.
With successful trials of those three impressive robots showing various degrees of autonomy, expect many more to be deployed in the coming year, and for the intelligence backbone of all robots to experience significant upgrades.
2. What’s old is new again
Desktop computers will make a comeback in 2013, and no, I don’t think it’s 1994 again. GCN's mobile reporter Greg Crowe got a lot of flak when he said that the desktop market would experience a resurgence, but this is the guy who gets to play with every single tablet on the market. If he sees the limitations of tablets compared to desktops, then you can bet they exist.
In fact, I think many others also discovered this in 2012 as government moved increasingly to add tablets and mobile gear to the field. Tablets just don’t have the horsepower to manage many of the applications that work fine on desktops, especially once you factor in multitasking.
On the other side of the house, amazingly powerful desktops like the Tiki from Falcon Northwest are coming on the market that are only four inches wide and 13 inches long. It’s not much bigger than some tablets, actually, yet it offers the power of Intel’s Core i7 processors, solid-state drives and the same GPUs used in today’s supercomputers. Moreover, it’s all available for a reasonable price.
Slightly more mainstream is the ThinkCentre M92p Tiny from Lenovo, which offers desktop performance in a 5-pound package.
As more people get their hands on tablets, more people will begin to see their limitations, even as some elements amaze them. In 2013, I think the disappointments will lead to desktops coming back into vogue.
3. Touch me if you can
The release of Windows 8, and Microsoft’s decision to make the tablet and the desktop interface of the new OS identical, made a lot of news in 2012. While I like the new interface, it’s pretty obvious that it skews toward the tablet. On a desktop computer, users can manipulate the various windows and screens using a mouse and keyboard, but it would be so much easier and more efficient for most users if they had access to a touch screen.
Thankfully, adding a touch screen to a desktop setup is easy, whether you are talking about a monstrous 65-inch panel or a more typical display. It’s simply a matter of one extra cable going from the touch-capable LCD to the desktop. Couple that with the fact that adding touch-screen capabilities to a monitor no longer breaks the bank, plus the lowest LCD prices in years (see the prediction about LCDs and LEDs below), and the time is right for one of the most underused pieces of technology, the desktop touch screen, to explode in popularity next year.
4. Trust no one
Although I’m pretty confident about a lot of the predictions on this list, this is the one I’m most confident about: Cyber attacks on government employees will increase in 2013. And I am not talking about the normal run-of-the-mill attack that tries to ensnare anyone and everyone in its net, or even those conducted with high-end hacker tools like the High Orbit Ion Cannon, which gives anyone the ability to launch DDoS attacks at will. I'm talking about targeted efforts to get access to, or bring down, government information at all levels.
South Carolina officials found this out in 2012. One would think a state-run tax processing system would be a backwater target for hackers, but it became the front line in a hurry. A well-crafted phishing attack designed to look like a local money transfer caught an employee off guard, and the employee gave up login credentials to the attacker. Because the tax system, incredibly, had no secondary protection or encryption, the hackers had unrestricted access to thousands of Social Security numbers, credit card numbers and business tax forms, making almost everyone in the state a possible target for identity theft and fraud.
The state scrambled to implement tighter controls on the information, but the message to hackers was clear, too: phishing attacks, even on smaller state-level systems, can reap huge rewards.
GCN security writer Bill Jackson rightly reports that browsers and defensive programs used by feds are increasingly getting smarter about thwarting malware and viruses, but defending against phishing attacks, especially targeted ones, requires user cooperation. Some money has to be spent educating users how to avoid the type of scam that entangled South Carolina. Otherwise, 2013 could very well be the year of the hack.
5. CRTs die, finally. I really mean it this time.
Surprisingly, when I wrote about the CRT market finally dying off this summer, many people rallied to the defense of these old workhorses. They claimed that CRTs had better refresh rates, higher resolution, better response times and could even be used to make toast in the morning. Only that last claim about the toast was actually true.
The one reason LCDs have not yet totally killed off CRTs is price. But in 2012, the bottom started dropping out of the LCD market. This would have happened years ago had it not been for an “accidental” fire at an Asian plant that was making most of the glass panels, which mysteriously caused prices to rise once again. But now the panels are being made and assembled all over the world, and many companies are selling them, so it will take more than a single fire to pull LCDs back out of the commodity segment of the market.
Even LEDs, which are just high-end LCDs with higher-performing backlight technology, are cheap now. When GCN reviewed the ViewSonic VA2451 LED monitor over the summer, we said it offered stellar performance and a 24-inch panel for just $215. That price would be considered a bit expensive just six months later.
Did you notice any of the crazy deals on LED TVs over the holidays? Sixty-inch displays were going for $899. More modest 32-inch TV displays were selling for under $100. And although high-end LCDs for use with computers are slightly more expensive, the technology is fundamentally the same, and units like the VA2451 prove that they are dropping their prices rapidly too.
Anyone, or any agency, that refuses to get rid of its radiation-spewing, heat-spilling, eye-strain-inducing CRTs for more efficient, high-performing LCDs in this environment is just being stubborn or ill-informed.
Posted on Dec 14, 2012 at 5:29 AM | 0 comments
It was interesting to see that the X-47B, a military stealth drone, was undergoing sea trials aboard the USS Harry S. Truman. If successful, it will be the first robot able to launch itself from and land on the deck of an aircraft carrier without human help. It could also become the first true robot in military service.
Most of the emerging robotic technology developments in the military involve something other than a true robot. Instead, we have lots of hybrid human and robot partnerships.
A true robot is generally defined as a machine capable of carrying out a complex series of actions automatically. So a bomb technician driving a machine to its destination and destroying some ordnance isn’t really using a robot -- he is using a Waldo, a term coined by Robert A. Heinlein for a remotely operated device that is manipulated solely by a human.
My point is that the development of actual robots really is something special. Most robots are as dumb as a bag of rocks.
Case in point: about 10 years ago at the Comdex computer show, I attended an event where real autonomous robots were put head to head in an arena to fight to the “death.” I thought it was going to be like the “Robot Wars” TV show, where metallic creatures fought each other to the delight of screaming fans. The difference was that on the TV show, the contestants drove their warriors; they were little more than weaponized RC cars. At the event I attended, the robots had to rely on their programming without any human help, thinking for themselves when fighting their opponents.
The fight was surprisingly boring. Each of the three robots in the arena was designed to scan its opponents and attack when it detected an optimal window, one where it could do the most harm while not taking any damage in return. Things looked promising when the saw blades started spinning and the pneumatic hammer on one robot thumped the ground like Thor getting ready to crush some heads.
But then something odd happened: nothing. The robots rushed up to one another and just stood there. Occasionally one would twitch or back up a few feet, which would provoke similar reactions in the others. But none of them ever attacked; they couldn’t find the optimal time to do so.
One of the people running the event was standing nearby. After about 15 minutes of this boredom, I heard him talking about going into the ring and trying to get something moving. Ultimately, he wisely decided against walking into a battle pit filled with active, autonomous killer robots and trying to kick one. They probably didn’t know Isaac Asimov’s Three Laws of Robotics, after all. I think he would have lost a limb at the very least, and might have scored the top spot of all time in the annual Darwin Awards. Getting crushed by killer robots before a live crowd would have been hard to beat. The only downside is that YouTube hadn’t been invented yet.
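That stalemate is easy to reproduce in a toy simulation. In this minimal Python sketch (the `exposed` flag and the robot names are invented for illustration, not taken from the actual event), each robot attacks only when it sees an opponent exposed while it is not -- and since no robot exposes itself first, the window never opens:

```python
def sees_opening(me, others):
    """An 'optimal window': some opponent is exposed while I am not.
    Each robot at the event was programmed with a rule like this."""
    return any(o["exposed"] for o in others) and not me["exposed"]

# Three equally conservative robots, none willing to expose itself.
robots = [{"name": n, "exposed": False} for n in ("saw", "hammer", "spike")]

openings = [sees_opening(r, [o for o in robots if o is not r])
            for r in robots]
print(openings)  # [False, False, False] -- a permanent standoff
```

With symmetric, purely defensive policies there is no mechanism to break the deadlock, which is exactly what the crowd watched for 15 minutes.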
That event made me acutely aware of the difference between a true robot and the kind I had seen on TV. Developments in the field of true robotics are impressive, which is why we should be in awe of the X-47B. Taking off, performing a mission and returning without human help is incredible. It raises new possibilities for what could come next.
Posted on Dec 13, 2012 at 5:30 AM | 0 comments
It’s getting pretty clear that we are nearing the limit of our wireless networking technology in its current form, and not just because of the speed of devices. More to the point, we are simply running out of room in the electromagnetic spectrum for radio waves.
We can inch into other frequencies, like gamma rays, but doing so without creating the Incredible Hulk, or just causing a lot of cancer, might be tricky.
But there is an entire spectrum that is used every day that computers have only begun to tap: visible light. A researcher named Harald Haas at the University of Edinburgh is working to enable computing devices to use light to communicate, according to CNN. If his plans work out, we may one day dump Wi-Fi for something else, perhaps "Li-Fi," formally known as visible light communication, or VLC.
Haas says that adding a microchip to a standard LED light can make it blink millions of times per second. Mobile devices with readers could then translate those blinks -- essentially ones and zeros -- into data. Adding an LED to mobile devices would allow communication in the other direction.
In the world imagined by Haas, every street light could become a high-speed Internet port. The human eye wouldn’t notice the difference. Much the same way movies appear to show solid, moving images because the frames are whizzing by at 24 frames per second, nobody would be able to tell the difference between a data-enabled light and a standard, always-on bulb.
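The simplest form of that blinking scheme is on-off keying: the LED's on and off states are themselves the ones and zeros. This Python sketch shows the encode/decode round trip in principle (real VLC systems use far more sophisticated modulation; the function names here are just for illustration):

```python
def encode(text):
    """Turn text into a stream of LED states (1 = on, 0 = off),
    most significant bit first."""
    states = []
    for byte in text.encode("ascii"):
        states.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return states

def decode(states):
    """Recover text from the photodetector's sampled LED states."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for bit in states[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return out.decode("ascii")

print(decode(encode("Li-Fi")))  # -> Li-Fi
```

Blink those states millions of times per second and the bulb looks steadily lit to the eye while carrying megabits of data.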
Of course, there are some problems. Light communication needs a constant line of sight. Radio waves can travel through a lot of substances without a problem, but if someone walks between a Li-Fi device and a receiver, the communication is broken -- not to mention the obvious fact that the signal can’t travel through walls or anything else that dampens or stops light.
There is also the potential problem of light pollution, especially in cities where this technology would be most useful. Lots of neon signs and non-communicating lights could interfere with the signal. And what happens if two hubs are close together, or a user is walking from one to the next? How will the handoff take place?
There is probably a long way to go before Li-Fi replaces standard wireless. But the technology can already be demonstrated in a laboratory setting. Can a move to the real world be far behind?
Posted on Dec 06, 2012 at 8:27 AM | 2 comments