Solid-state transistor-based receiver

DARPA out to break the 1 THz barrier for solid-state receivers

The Defense Advanced Research Projects Agency (DARPA) is on a mission to create the first solid-state transistor-based receiver that achieves gain at frequencies above 1 terahertz (THz). Earlier this year researchers came closer than ever before, making one that worked at 0.85 THz, up from the previous milestone of 0.67 THz, so they can definitely say they are making progress.

This is all part of DARPA's Terahertz Electronics program. The goal of this aptly named program is to make high-performance integrated circuits that operate at frequencies exceeding 1.0 THz.

Just so you know, this is at the far-infrared end of the electromagnetic spectrum, where frequencies are well above the radio waves that mobile devices use. The military still puts that region to use, though, in programs such as DARPA's Video Synthetic Aperture Radar (ViSAR), which will be used by aircraft to perform accurate reconnaissance in overcast conditions.

This part of the spectrum is called the sub-millimeter wave (sub-MMW) frequency band, operating above 300 GHz, where wavelengths are shorter than 1 mm, DARPA says. Making use of those frequencies to date has required "frequency conversion," that is, multiplying frequencies to make them manageable. But conversion carries complications of its own, such as requiring large-footprint devices to operate at those frequencies, DARPA said.
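DARPA's 300 GHz cutoff follows directly from the wavelength-frequency relation λ = c/f; a quick sketch (illustrative only, not from the article) shows why sub-MMW wavelengths fall below 1 mm, including at the 0.85 THz mark the researchers just hit:

```python
# Free-space wavelength lambda = c / f.
# Above 300 GHz, lambda drops below 1 mm -- hence "sub-millimeter wave."

C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Return free-space wavelength in millimeters for a given frequency in Hz."""
    return C / freq_hz * 1000  # meters -> millimeters

for label, freq in [("300 GHz", 300e9), ("0.85 THz", 0.85e12), ("1 THz", 1e12)]:
    print(f"{label}: {wavelength_mm(freq):.3f} mm")
```

At 300 GHz the wavelength is just under 1 mm, and at the program's 1 THz goal it shrinks to about 0.3 mm, which is part of why device footprints and fabrication tolerances get so demanding in this band.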

The Defense Department agency is aiming to use the frequencies for imaging, radar, spectroscopy, and some communications systems.

Even though civilians' smart phones and tablets won't directly benefit from advances in the THz Electronics program, there's nothing to say that breakthroughs in mobile technology won't come from DARPA's advances in this area.

Posted by Greg Crowe on Oct 18, 2012 at 9:39 AM

Processor, future smart phones

Smart phone of the future: A chip in your head?

CNN recently posted an amusing yet thought-provoking article envisioning the development of smart phones over the next 100 years. The story culminates with the collapse of civilization in less than a century (from climate change), with mobile communications reduced to throwing message rocks at each other.

That scenario might’ve made Stanley Kubrick proud, but the idea that most interests me is the authors’ prediction that, in 75 years, a microchip could be inserted into our heads that will allow us to connect directly with others through our brains, as well as to the Internet. While the writers are concerned about potential abuses from commercial advertisers, I can think of a few ways this technology would affect the public sector workplace.

First, security identification badges would become obsolete. Authentication for location access could be done with the brain microchip, and everyone would instantly know whether another person was supposed to be there, because his or her own chip would tell them so. Of course, for this to take place, security would have to be top-notch to stop intruders from using chips made to mask identity. Wow, that sounds like the plot to a great science fiction espionage thriller. You are welcome, 007.

The behavior modification scenario the writers propose would be unlikely to occur, I think, because the chip probably would not be connected to that area of the brain. But if it were, network administrators could finally make sure their painstakingly crafted security protocols are actually followed by everyone!

Although the writers project that this development is 75 years away, I think it might come sooner. We already have the technology to make a chip small enough to perform all of the necessary functions. The areas that need improvement are suppressing biological rejection and understanding how the human brain works. I'm guessing the latter will probably be the problem that delays us having chips implanted in our heads.

It’s all just guesswork at this point, of course, but if the writers are correct about a brain microchip — and wrong about the end of civilization — there would be practical uses for the technology. If nothing else, we could be assured that motorists at last would make only hands-free calls.

Posted by Greg Crowe on Oct 12, 2012 at 9:39 AM

A few ways to protect yourself against 'visual malware'

In light of the Naval Surface Warfare Center's demonstration of PlaceRaider, a malware app that uses a smart phone's camera and sensors to build a 3D model of a location, smart phone users, particularly those in government positions, should be asking how they can defend themselves against this new type of "visual malware." Here are a few ideas:

First, be careful what you click. You might not think you are downloading malware, but that's how the vast majority of malware gets onto personal computers: you open an e-mail you shouldn't, click on links you shouldn't, and so on. Seriously, just think before you click, people.

Second, you can disable your smart phone's camera when it's not in use. You could do this in a few different ways: turn your smart phone off when you are not using it, or physically obstruct the camera lens, such as with a piece of tape or by keeping the phone in a pocket instead of on your desk.

Lastly, you could regress entirely and get a 1G phone. This wouldn’t have a camera, or even a data plan that would enable you to click on bad links and get malware. Of course, most people would not want to exist at this level of functionality.

Given that these are listed in order of increasing impact on the user's convenience, I would suggest that the first one is the one we need to work on. Oh, and by the way, even though the Navy demonstrated this on an Android system, there is no reason to think that Apple smart phones wouldn't be just as vulnerable, since the newer ones now allow programs to run in the background.

Posted by Greg Crowe on Oct 05, 2012 at 9:39 AM