In one way, I was sad to learn that the federal government has no plans to build a Death Star in 2013. But I guess there is always next year. As a reporter covering emerging technology, I could generate a lot of stories about the technology behind the universe’s first Death Star.
The White House’s official response to the Death Star petition on its We the People site offers some pretty good logic for not trying to build one. That includes the fact that the cost of such an endeavor would run $850,000,000,000,000,000 ($850 quadrillion) and that the current administration is not in favor of blowing up planets. Then there is the well-known design flaw that would make it silly to build such a structure when it could be exploited and destroyed by a single-man spacecraft.
The official response goes on to point out several cool real-world programs that are being worked on right now by NASA, including the International Space Station and many of the robotic projects that we have reported on recently.
So there is no Death Star coming, which is probably a good thing, but the whole petition response has called attention to a rather dynamic program the White House started in which anyone can create an online petition, gather enough signatures and get an official response from the White House.
It’s a pretty innovative way to let people engage with government, and a lot of serious questions have been asked and answered, from taxes and gun control to letting the chief of the National Guard Bureau sit on the Joint Chiefs of Staff, which actually did happen.
Of course, the downside of the site’s popularity is that it inevitably attracts frivolous ideas, from deporting talk show hosts and college quarterbacks to officially recognizing Sasquatch as an indigenous North American species. The White House had tried to guard against frivolous discussions by requiring that any petition draw 25,000 signatures to merit a response. But after the Death Star made the cut, the threshold was raised to 100,000.
Even so, this is a pretty cool attempt to make government more accessible using technology, even if a few silly ideas make it through. And really, the support for the Death Star petition will probably prove to be the exception among those offbeat ideas rather than the rule. If it encourages more people to participate in government, that’s good, even if we won’t be blowing up planets any time soon.
Posted on Jan 17, 2013 at 12:07 PM
The companies at CES this year seemed to be really focused on display technology, including actually showing some working OLED models. But beyond just the physical nature of the displays, the industry also seems to be obsessed with making sure we can communicate with our displays in innovative ways.
I’ve already predicted that the proliferation of Windows 8 is going to spike demand for touch screens, even at the desktop level. But according to the companies at CES, everything from televisions and phones to cars and games is going to begin adding gesture technology.
We have a request in to test one of the first Leap Motion devices when they become available in a couple of months, since many Leap-enhanced devices could start making their way into public-sector circles. But many other companies are introducing gesture interfaces, or soon will be.
Something I hadn’t considered, though, was brought up by Computerworld editors after sampling gesture-controlled devices at CES: there are no standards. Going from one gesture-based input device to another now could be as different as switching from French to Chinese. Motions that do one thing on a specific device might do nothing at all on another, or worse, do something entirely different from what you intended. Want to move a file from one folder to another? Sorry, you just formatted the hard drive with your left arm.
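To make the problem concrete, here is a purely illustrative sketch (the device names and gesture mappings are invented, not taken from any real product) of how two devices with no shared standard can assign the same motion to very different actions:

```python
# Illustrative only: two hypothetical devices that recognize the same
# gestures but map them to different actions, since no shared standard exists.
DEVICE_A = {
    "swipe_left": "next_page",
    "two_finger_pinch": "zoom_out",
    "circle_clockwise": "volume_up",
}

DEVICE_B = {
    "swipe_left": "delete_item",      # same motion, destructive result
    "two_finger_pinch": "close_app",
    "circle_clockwise": "rewind",
}

def conflicting_gestures(a, b):
    """Return gestures both devices recognize but interpret differently."""
    return sorted(g for g in a.keys() & b.keys() if a[g] != b[g])

print(conflicting_gestures(DEVICE_A, DEVICE_B))
# Every shared gesture means something different on the other device.
```

A standards body would, in effect, force the two dictionaries to agree for the common gestures, the way a mouse's left click means "select" everywhere.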
The problem is that some companies -- and I’m looking at you, Apple -- have tried to patent a lot of gestures, such as pinching to zoom in or out of a screen (though U.S. patent authorities recently rejected that one, at least temporarily), or sliding to unlock a device.
The online site io9 points out that a whole lot of gestures are already patented, and not just by Apple. That company seems to be the biggest gesture hoarder, even applying for patents on crazy three-fingered, twirling gesticulations that probably won’t ever be used, but which it doesn't want anyone else getting their grubby three fingers on. In a way it reminds me of the early days of the Web, when all the good, short addresses were being snapped up quickly.
Faced with the elimination of the more common gestures, companies have to get creative to make their new devices work while not stepping on the legal toes of someone else.
If we don’t want to be issued a hula hoop and jump rope in order to flip through our TV channels in the future — and really, nobody wants that — I suggest that the industry get together and form a gestures working group, like the IEEE has for wireless, to develop standards that everyone could use.
Gesture controls really could be a huge change in how people use their computers, whether at home, in the office or in the field. It’s easy to see how they could catch on in presentations or educational settings, as well as with mobile, and even desktop, devices. And they have potential in medical settings, such as operating rooms, where not having to touch a screen or keyboard is an advantage. But gestures would have to work in a uniform manner, like a mouse or touchpad does.
That would mean some companies would need to give up their patents on gestures for the good of the whole. The alternative is a gesture war that could stifle what could be one of the most innovative areas to hit computing in years. And if that happens, I can only think of one appropriate gesture as my response.
Posted on Jan 15, 2013 at 1:19 PM
OK, I really mean it this time. Last year I predicted that organic light-emitting diode (OLED) monitors would make it big in 2012, based on their huge presence at the Consumer Electronics Show.
But they didn’t even make a dent in the market. That’s not really a bad thing, as we were treated to a year of rapidly dropping costs for standard LCD and LED monitors, a trend that I’m sure will continue.
But I still want to see these amazing displays go mainstream. People don’t realize it, but OLEDs actually go back more than 10 years at this point. Their secret is that they use carbon-based organic compounds (hence the name) to create a paper-thin display. But for years, there were no viable applications for them, mostly due to the technology's limitations. The Army even tried to incorporate them into wrist watches at one point, but that didn’t take off.
The problem with OLEDs up until recently has been that the screens die too quickly. Ten years ago, I saw them at CES when they were just sheets of lighted, colored paper. By the end of the show, they were very dim sheets of paper, having lost 20 percent of their brightness each day.
But this year, they are everywhere at CES, with everyone from CNN to Wired reporting about them. Not only are companies able to offer huge screens that are lightweight and about a half-inch thick, but the detail displayed is four times as great as the highest-end LED HDTV available today. And every company that makes displays has at least one new OLED model slated to come out this year.
The new OLEDs feature 4K resolution, a term that refers to the horizontal pixel count of the screen. Depending on the product, the screens are reported to have resolutions from 3,840 by 2,160 pixels to 4,352 by 2,176 pixels, drastically higher than the 1,920-by-1,080 resolution of today's full-HD screens.
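The "four times the detail" claim checks out with simple arithmetic, comparing a common 4K resolution against full HD:

```python
# Quick arithmetic behind the "four times the detail" claim:
# a 4K panel vs. a full-HD (1080p) panel.
uhd_pixels = 3840 * 2160   # one common "4K" resolution
hd_pixels = 1920 * 1080    # full HD

print(uhd_pixels, hd_pixels, uhd_pixels / hd_pixels)
# 8294400 2073600 4.0 -- exactly four times the pixels
```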
Now the biggest obstacle to widespread adoption isn’t technology, but price. Especially for cash-conscious government agencies, it’s going to be a stretch to justify buying one. Although few at CES seem to want to put an official price on the OLEDs, LG told Mashable its 55-inch TV, due to arrive in March, will go for $12,000. Another vendor at CES told a reporter that a 24-inch model could cost that much.
While they're not likely to be widely adopted in agencies, at least until prices come way down, these high-res screens could prove useful in offices at the Defense Department, NASA and other agencies that could leverage the extreme detail in, say, satellite or aerial photos.
But at least they will hit the market soon. Rich movie stars can buy them for their home entertainment centers, and prices will eventually come down.
Meanwhile, these changes in the display market will have an effect on the rest of us. The already rock-bottom prices for standard LCD and LED displays are going to fall even further. Over the recent holidays, 60-inch displays were going for $899 and 32-inch panels sold for under $100. With OLEDs becoming the next big thing, prices for the older tech will drop even more. Be ready to scoop them up.
Posted on Jan 10, 2013 at 2:08 PM
Question: What happens to aging supercomputers when they are no longer found to be useful?
A little while ago I wondered if the United States, faced with a sagging economy, was set to lose the supercomputer race. Soon after the Energy Department captured the top spot in supercomputing with Titan, China announced it would try to regain the lead with Tianhe-2.
Oak Ridge National Laboratory’s Titan is capable of sustained computing of 20 petaflops, or 20 thousand trillion calculations per second. It will become an invaluable tool in modeling everything from bomb blasts to climate change. But to go beyond that speed, into the exascale range (1,000 petaflops), will cost billions. We may not have the money for that kind of investment at the moment, which is why I wondered if it’s even something we should pursue.
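The size of the jump the post describes can be checked with back-of-the-envelope arithmetic:

```python
# Putting the speeds in context: Titan's sustained rate vs. exascale.
PETA = 10**15
titan_flops = 20 * PETA        # 20 petaflops = 20 thousand trillion ops/sec
exascale_flops = 1000 * PETA   # 1 exaflop = 1,000 petaflops

print(exascale_flops // titan_flops)  # 50
```

In other words, exascale means roughly a fifty-fold speedup over the fastest machine in the world at the time, which is why the price tag runs to billions.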
There were lots of comments on that blog, but I just wanted to add one more piece of information to think about. This week, we heard a cautionary tale from the state of New Mexico, which has spent about $20 million to create and operate Encanto, which was the third-fastest computer in the world back in 2008, capable of 127 teraflops.
So what’s in store today for the third fastest computer in the world five years ago? It’s going to be chopped up and sold for parts, effectively.
The Albuquerque Journal is reporting that the state can’t seem to make any money off the system, which was originally touted as an economic development and research tool, and has repossessed it from the nonprofit organization that was running it.
Nobody seems interested in buying the system either, probably because of its inability to make money, and the fact that it costs about $1 million per year to maintain. So the state is considering dividing up the system’s computing racks among three universities. The system has 28 racks with 500 processors each, and bidding has already begun for individual components, though the state is still hopeful that a buyer for the complete system can be found.
So here we have what looked like a success story at first. Check out this facility tour video to see how happy officials seemed with Encanto back when it was built.
Now, it’s worse than an albatross around the state’s neck. A financial investigation found that despite the $11 million original price tag and $9 million in continuing costs, the system is now worth a few hundred thousand dollars.
One could argue that a state government should never have gotten into the supercomputer business or that Encanto was mismanaged. It could even still prove a success story if somebody figures out something profitable and worthwhile to do with the thing.
But it is yet another warning to be cautious in running the supercomputer race. Supercomputers such as Titan are intended as research tools, not profit centers, but some, like Encanto, were intended to spur economic development. And New Mexico could lose millions. If the next generation of supercomputers proves similarly unsustainable, the costs could be much higher.
Old supercomputers don’t die. They don’t fade away. They get sold for parts.
Posted on Jan 08, 2013 at 8:01 AM
Most government agencies and individuals have set up some form of wireless connectivity in their offices or homes. And if you’re like most people, you’ve probably noticed periods when the bandwidth on that shared connection slows to a crawl, even if the wired part of the network is performing adequately.
The reason could be that applications are hogging far more of the limited available wireless signal than they actually need. Qualcomm is trying to combat this problem using the cloud, some local intelligence at the router level and the same kind of shared, crowdsourced information that companies in other fields, such as antivirus vendors, have used for years.
To increase connection speeds, the company has unveiled StreamBoost, a new traffic-management technology for Wi-Fi, which will soon be making its way into routers and gear from D-Link and Alienware.
StreamBoost users would opt in to a service in which their routers take snapshots of the various applications that are running and upload that data to the cloud to be analyzed. Qualcomm promises that the data would be anonymous, though oddly enough it will still require users to sign up with a username and password.
Thoughts of Big Brother aside, the technology will measure whether, say, a movie application requested a 20 megabits/sec pipe but used only 4 megabits/sec the entire time. Currently, most routers simply give each application whatever it says it needs, so lots of bandwidth ends up allotted to what is essentially empty space in the airwaves.
Once enough data has been collected from users, a picture of actual usage patterns will emerge, and routers will be automatically configured to give each application only what it needs, not simply what it requests. Users will also have some control over which applications should be prioritized on their individual routers, or they can simply leave it up to StreamBoost to make sure every drop of bandwidth is efficiently assigned.
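This is emphatically not Qualcomm's actual algorithm, but the core idea, sizing each application's allotment from observed usage rather than from what it requests, can be sketched in a few lines (the app names and numbers here are invented for illustration):

```python
# Toy sketch (NOT Qualcomm's actual algorithm) of usage-based allocation:
# give each app its observed usage plus some headroom, capped at its
# request, instead of granting the full request blindly.
def allocate(apps, capacity_mbps, headroom=1.25):
    """apps: {name: (requested_mbps, observed_mbps)}.
    Returns per-app allotments plus the capacity left free for bursts."""
    allotment = {}
    for name, (requested, observed) in apps.items():
        allotment[name] = min(requested, observed * headroom)
    allotment["unallocated"] = capacity_mbps - sum(allotment.values())
    return allotment

apps = {
    "movie_stream": (20, 4),   # asks for 20 Mbit/s, actually uses ~4
    "video_call": (8, 6),
    "web": (5, 1),
}
print(allocate(apps, capacity_mbps=50))
```

With request-based allocation, the three apps above would tie up 33 Mbit/s of a 50 Mbit/s link; usage-based allocation reserves under 14 and leaves the rest free.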
"Our goal at D-Link is to ensure each consumer has the best possible online experience," said Dan Kelley, associate vice president of marketing for D-Link Systems. "StreamBoost gives us a way to make sure every person using the network will have an optimal experience, regardless of application usage."
The technology is being demonstrated this week at CES in Las Vegas.
Now, part of me wonders if technology like this is really necessary. We should start seeing gigabit/sec wireless routers become mainstream once the 802.11ac standard takes hold. But I suppose even if you have a huge highway, there’s no reason to weave across several lanes when you don’t need to. Also, usage tends to rise with capacity, so we will probably find a way to fill those huge gigabit pipes once we have them. After all, everyone said we would never fill a 10-megabyte hard drive when those first came out.
As for the technology itself, this is another example of the cloud making a crowdsourcing-type application possible. Antivirus companies have done this for quite some time: users automatically report and upload the specifics of new viruses they encounter so they can be analyzed, and in turn receive patches and updates to combat those threats quickly.
Some folks might worry about what Qualcomm would do with the data, and the fact that you have to sign up with a username and password is a little troubling. Why make people identify themselves if you are then going to scrub that data to make it anonymous? Seems like an unnecessary step, and one that is sure to reduce the pool of people willing to use the service.
But if it works and does what it claims, it could eliminate wireless bloat, something that will only become an increasing problem as more agencies add wireless routers and more users cut their cords.
Posted on Jan 07, 2013 at 2:02 PM
It’s always great to see efforts to bring young people into the world of computers and programming. It keeps the public sector and industry alike from worrying about not having enough new people to carry the torch.
Along those lines, one of the most innovative projects in recent years has been the Raspberry Pi, a single-board computer that can be purchased for just $25. There is also a kit that is a little more expensive, but allows the entire computer to be constructed from scratch, sort of like a modern-day Erector Set.
If only they had something like that when most of us were growing up. How cool would it have been to find a Raspberry Pi underneath the tree on Christmas morning?
The Pi is impressive because of its ability to bring young people into the world of computing -- the Raspberry Pi Foundation promotes it for teaching basic computer science in schools -- as well as the potential for parental bonding, but it’s also a surprisingly robust computer.
The Pi runs on a 700 MHz ARM processor that can be overclocked up to 1 GHz. The overclocking is one experiment you can perform with it that doesn’t void the warranty. It also has 256MB of RAM (newer boards ship with 512MB). To save money, the Pi doesn’t have a hard drive, using an SD card for booting and storage. You can easily attach a portable drive and use that as your storage medium if you want.
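For the curious, the overclock is set in the Pi's /boot/config.txt file; the values below are an illustrative example of the kind of settings the raspi-config tool's presets apply, not a recommendation for any particular board:

```ini
# /boot/config.txt -- example overclock settings for an early Raspberry Pi.
# Illustrative values only; raspi-config offers preset levels, and
# settings within its supported range do not void the warranty.
arm_freq=1000       ; CPU clock in MHz (default 700)
core_freq=500       ; GPU core clock in MHz
sdram_freq=600      ; memory clock in MHz
over_voltage=6      ; small voltage bump to keep 1 GHz stable
```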
It can run on a variety of operating systems, including its own Raspbian, plus Debian GNU/Linux, Fedora, Arch Linux ARM, RISC OS and FreeBSD. Basically, you just attach a mouse, keyboard and monitor and you have a pretty neat little computer that you can fool around with, experiment on, or actually use productively.
The unit is selling quite well by all reports, but perhaps the biggest measure of success is that just before the holiday, the foundation announced the grand opening of the Raspberry Pi App store. The store can be accessed easily through the Raspbian OS or through a standard browser on a non-Pi computer. Most of the apps are free, though some have to be purchased. They include productivity suites, utility programs and even games.
This will add yet another element to the educational value of the Pi, encouraging users to program their own apps, which can then be distributed or sold in the store. In addition to professionally programmed titles like the “Storm in a Teacup” game from Cobra Mobile, I’ve already seen news reports about modern-day lemonade stand-type businesses being set up by kids who plan to create and sell their wares within the Pi community.
I’ll tell you the truth: I was a little skeptical when plans to create the Raspberry Pi were announced. But I’m really pleased that it’s doing so well. I wonder how many children will get a Raspberry Pi as a gift over the holidays, and how many of those kids will go on to become the great hardware makers and software programmers of tomorrow? Government has worried for years about a coming IT talent drain. Maybe something like the Pi can help. Good luck, kids! I’ll look for your apps in the store.
Posted on Dec 21, 2012 at 12:06 PM