12.20.2011

Securing the data

Whose fault is it when data is stolen? It’s rarely blamed on the programmers.

If a company executive leaves a laptop filled with confidential data in a taxicab, you probably wouldn’t blame a software developer. Instead, you’d presumably ask, why was that data on the laptop to begin with? I’ve often wondered why corporate executives have access to customer card information in the first place, and why security policies allowed such data to be downloaded to any end device, especially a not-locked-down laptop. But you wouldn’t blame the programmers.

If an unencrypted data backup tape disappears en route to a secure offsite facility, you’d yell at a sysadmin, not at a C++ coder. “Why wasn’t the data encrypted?” you’d want to know. “How could it be written in plain text?” That’s the fault of the backup software, or again, security policies – not the programmers who wrote the applications whose data is being backed up.

Now, who do you blame when hackers compromise credit-card terminals? My family’s local grocery store – Lucky’s in Millbrae, Calif. – was recently penetrated by so-called skimmers, who tampered with in-store card readers and grabbed up to 500 customers’ credit card numbers. As far as we can tell, our credit card wasn’t compromised – but you can trust that we’ll be scrutinizing the Visa bill extra closely from now on.

Certainly, you can blame Lucky’s, for not ensuring the physical security of those devices. But what about the back-end programmers for the grocery chain? How about the embedded developers of the card reader’s firmware? Or how about any number of applications that were involved, from the credit-card clearinghouse to the bank? Could programmers be in any way responsible? Could they have done something, anything, to prevent this incident?

The reality is, well, no. It’s unlikely, especially since the devices were physically tampered with. But even so, it’s impossible for programmers to anticipate every possible scenario, or to model every type of threat to a complex web of applications developed and administered by many different companies.

No series of locks and alarms can truly prevent a home from being targeted and robbed by criminals – or burned down by arsonists. And there’s an increasing awareness that there’s nothing that developers can do to secure today’s interconnected applications 100%. But that doesn’t mean that we shouldn’t try.

Remembering John McCarthy

Computer scientist John McCarthy passed away in October 2011. In an SD Times end-of-year retrospective, cryptographer Whitfield Diffie wrote a personal essay focusing on McCarthy’s work on the creation of public-key crypto.

Diffie presented a different side of McCarthy, whom I knew mainly for his pioneering work on artificial intelligence and the LISP programming language. The only time that I recall meeting John McCarthy was at the AAAI conference in 1991, back when I was the editor of AI Expert magazine.

Shortly after our retrospective was published, we received a letter from an irate reader, Peter Schow, who insisted that our tribute was all wrong:

I was stunned to read the John McCarthy tribute in the SD Times December 2011 and not see his foundational contributions to functional programming, Lisp, and artificial intelligence (he invented this phrase!) mentioned at all! His book "LISP 1.5 Programmer's Manual" (1962) is a classic in Computer Science and is still very relevant today, especially as functional programming is undergoing yet another revival in languages like Scala and Clojure.

The title should have been entitled "Father of Lisp and AI" but instead the tribute appeared to be hijacked for the purpose of highlighting the author's public key cryptography invention. It's mildly disturbing that we are forgetting our history, short as it is, and you owe it to yourself to investigate the subject of John McCarthy's lecture when he received the 1971 ACM Turing Award. I'm shocked that your editors could have let this been published and I think you owe your readers a true memorial to John McCarthy.

John McCarthy is a legend — and his work in crypto is no less important than his work in AI. He deserves to be recognized for his contributions in both areas. I’d like to thank Whitfield Diffie and Peter Schow for sharing two views of this incredible man.

(You can read another view in the SD Times obituary of John McCarthy, written by Alex Handy.)

12.09.2011

Introducing AnDevCon III, May 2012

AnDevCon III – the third iteration of our Android Developer Conference – will be coming back to the San Francisco Bay Area from May 14-17, 2012.

We had some excitement scheduling this conference, as those who attended AnDevCon II in November observed.

At first, we scheduled AnDevCon III for April 2012 (and publicized that in the show program we gave away at the conference). But then in mid-October, Google announced that Google I/O – its two-day tech conference for all things Google, not just Android – would be on April 24-25.

“That’s not good,” we said. So our team went back to the drawing board and found some great dates in late June – and began gearing up for a public announcement.

But then Google decided that Google I/O needed to be three days – and they changed their dates to be June 27-29. Which happened to overlap the second set of dates we had selected for AnDevCon III.

In the words of the bowl of petunias in Douglas Adams’ “Hitchhiker’s Guide to the Galaxy,” we said “Oh, no, not again!” Back to the drawing board.

Unless Google changes its mind (again), AnDevCon III will be May 14-17, 2012. As before, the first day will be pre-conference workshops; the other three days will be filled with technical classes.

We are very excited about AnDevCon III. We aren’t anticipating many changes from AnDevCon II. However, there will be lots more coverage of alternative platforms beyond smartphones and tablets. Thus, you’ll see more embedded Android sessions, an expansion beyond the single class we had on the Android-based Google TV, and also a deeper dive into Arduino.

Mark your calendars, and we look forward to seeing you there!

12.03.2011

Why is video conferencing so hard?

Video conferencing is difficult. Whether you’re using a phone, tablet, desktop or laptop, there are challenges everywhere.

• Video conferencing requires that all participants use the same service.

Whether it’s Skype, Oovoo, FaceTime, AIM, Tango, Fring, Google Talk, WebEx, GoToMeeting, AnyMeeting or whatever, that means a plethora of accounts – and of course, not all accounts work (or offer the same features) on all device types or operating systems. In order to get advanced features, many services want you to pay for a premium subscription. When you need multiple services to be compatible with your friends, colleagues or customers, all those subscriptions can get expensive.

• Audio and video quality is really spotty.

Recently I did a video interview using Skype – I was on a T1 line in California, and the other participant was using ADSL in Florida. The images and sounds kept breaking up, synchronization was terrible, and every so often one of us would lose the picture entirely. That meant breaking and reestablishing the connection between our PCs. The experience was pretty bad.

Another time I did a FaceTime chat between my iPad 2 and an industry expert using an iPhone 4. Audio and video were generally outstanding, but at one point the call dropped and we had to restart. Of course, FaceTime is only available on iOS or Mac OS X.

• Multi-party conferencing is a nightmare.

Some services only allow two parties on a video call. Some can handle up to 6 users sending video; a few can handle more. But again, everyone has to be on the same system. With most services, only one user needs to have a premium account, but on others everyone must have a paid account.

• Ease of use and functionality is spotty.

Each system requires its own user directory. The means for users to sign into and out of multi-user chats vary. It’s a mess. Recording video calls? Some services have that built in; others don’t offer it at all. Some only allow the host to record. The user interfaces are uniformly terrible, and the documentation is worse. I’ve never found a scalable, cross-platform system that’s truly easy to use.

Imagine if standard wireline or mobile wireless calls worked like this. “Sorry, Bob, we can’t add you to our conference call because you use Verizon.”

We are used to calling any telephone number from any telephone. Doesn't matter if it’s mobile or wireline or even Voice-over-IP; doesn’t matter who the carrier is; doesn’t matter whether it’s CDMA or GSM. Whether you are calling Boise, Bangalore, Brussels or Brazzaville, if you know the phone number, you can make the call.

Yet with video, there’s too much manual handshaking required, and hoops that must be jumped through. Shall we use Skype or Google Chat? Oh, we can’t do AIM because I have a Microsoft Messenger account. We can’t use FaceTime because you have an Android phone. We can’t do video at all because your company has a firewall.

Picking up the phone is better than email. A video call is better than a phone call. And a video conference call is better than an audio-only conference call.

I can’t wait for this technology to mature and work properly.

Was Apple right about Flash?

As you may have seen a few weeks ago, Adobe is giving up on Flash for mobile devices, and is embracing HTML5.

Flash doesn’t run on Apple’s iOS devices. That’s not news, of course. Flash has never run on the iPhone, iPod touch or the iPad. This was a big deal several years ago, especially when the iPad was introduced – how could Apple claim that the tablet provided access to the Internet when so many popular Flash-based websites wouldn’t work?

Pundits declared Apple arrogant in its refusal to install a Flash runtime on its iOS devices. In early 2010, Adobe claimed that Apple was creating a closed ecosystem with its portable computers, and that Flash was an open technology.

In April 2010, Steve Jobs fought back with his thoughts on Flash. He argued (convincingly in my opinion) that the opposite was true: that Apple, by pushing Web developers to use HTML5 instead of Flash, was on the side of open standards, and that Adobe’s Flash was a closed proprietary system. Jobs further argued that Apple had never seen a Flash implementation that performed well on portable devices.

Since that time – a year and a half – iOS’s lack of Flash has become less and less of an issue. As a consumer who owns both an iPhone and an iPad, I certainly found fewer and fewer websites that relied upon Flash. And although you can run Flash on Android devices (and I own both Android phones and tablets), to be honest, this was never a significant reason for me to pick up an Android device instead of an Apple one.

In large part, that’s because developers of popular websites did exactly what Jobs predicted. They either moved away from Flash entirely, or created parallel graphics display systems that sniffed out iOS devices and offered them an HTML5 experience.
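That device sniffing is simple to implement. Here’s a minimal server-side sketch in Python – the user-agent markers and function name are my own illustration, not any particular site’s code:

```python
# Minimal sketch of server-side user-agent sniffing: sites that served
# Flash by default could check for iOS devices and fall back to HTML5.
# The marker strings and function name are illustrative only.

IOS_MARKERS = ("iPhone", "iPad", "iPod")

def pick_video_player(user_agent: str) -> str:
    """Return which rendering path to serve for this browser."""
    if any(marker in user_agent for marker in IOS_MARKERS):
        return "html5"   # no Flash runtime available on iOS
    return "flash"       # legacy default for desktop browsers of the era

print(pick_video_player("Mozilla/5.0 (iPad; CPU OS 5_0 like Mac OS X)"))
```

In practice, sites that went down this road soon realized the HTML5 path worked everywhere, which is exactly why the Flash branch withered.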

Once a website starts down that road, it’s nearly inevitable that the site will abandon Flash altogether sooner or later. I won’t miss it.

It looks like Adobe won’t miss it either. On Nov. 9, the company’s official blog announced: “Flash to Focus on PC Browsing and Mobile Apps; Adobe to More Aggressively Contribute to HTML5.”

The blogger, Danny Winokur, an Adobe vice president, wrote, “HTML5 is now universally supported on major mobile devices, in some cases exclusively. This makes HTML5 the best solution for creating and deploying content in the browser across mobile platforms.” Yes.

What’s the future of Flash? Adobe says that it is still developing it as a PC-based technology. Adobe has turned its Flex development environment over to the Apache Software Foundation. Flex has been open source software for some time, and while Adobe is expected to continue contributing to its development, Apache will now be calling the shots.

I wonder how long Adobe will hold out on Flash. As a desktop-only platform, it’s not very compelling. The future belongs to HTML5.

Picture-perfect software

A four-day weekend doesn’t mean four days without work, not in today’s modern economy. However, a holiday does offer a nice healthy opportunity for improving my life/work balance. Although I spent a lot of time in the office over this year's Thanksgiving holiday, it also meant time pursuing various hobbies – specifically photography.

On one level, photography is an artistic endeavor. It's little changed from my youth when I shot medium-format and 35mm film and developed the black-and-whites in my high-school darkroom. Composing shots, developing film, making prints and displaying my favorites was an analog, creative process.

Today, there’s obviously still an artistic aspect to photography. Taking closeups of butterflies at the California Academy of Sciences in San Francisco, or catching whales playing near Pacifica Pier, or shooting a fast-paced girls’ soccer game (AYSO under-14 – one of my best friends is the coach) is about having a good eye and feel for the photograph, not about having expensive technology.

Everything else about photography is about hardware and software, some embedded, some desktop, some living in the cloud.

• The proprietary microprocessor inside my Canon EOS SLR runs sophisticated firmware that couples the image sensor, auto-focus sensor and some AI to get the shot, while calculating exposure and stabilizing the image.

If you shoot RAW images (which I do), the bits are transformed by codecs and then stored on the camera’s memory card.

If you shoot JPEG images, there is more embedded software to apply creative post-shooting image transformations and real-time file compression before the file is written to the memory card.

Every so often, Canon offers firmware updates for my cameras, sometimes to fix bugs, sometimes to improve performance and sometimes to offer new functionality.

• My Mac reads the memory card’s file system and copies the raw image files onto a hard drive. While the software there is not photography-specific, the operating system is essential for managing my digital images.

Adobe’s Lightroom software helps me manage all the photographs in my library – tens of thousands of them, all indexed in a super-fast metadata-rich database. Lightroom also contains tools for manipulating the image files, using efficient algorithms, and can export them in other formats – or even upload them to online services.
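The heart of such a catalog is an indexed metadata database that answers “which shots?” queries without ever touching the raw image files. A toy sketch using Python’s built-in sqlite3 – the schema and sample rows are invented for illustration; Lightroom’s actual catalog format is proprietary:

```python
# Sketch of a photo-catalog metadata index, in the spirit of what a
# tool like Lightroom maintains. Schema and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE photos (
    filename TEXT, shot_date TEXT, iso INTEGER, keyword TEXT)""")
db.execute("CREATE INDEX idx_keyword ON photos(keyword)")

db.executemany("INSERT INTO photos VALUES (?, ?, ?, ?)", [
    ("IMG_0001.CR2", "2011-11-24", 100, "thanksgiving"),
    ("IMG_0002.CR2", "2011-11-24", 800, "thanksgiving"),
    ("IMG_0003.CR2", "2011-06-12", 400, "whales"),
])

# Fast metadata query: all high-ISO Thanksgiving shots.
rows = db.execute(
    "SELECT filename FROM photos WHERE keyword=? AND iso>=?",
    ("thanksgiving", 400)).fetchall()
print(rows)  # [('IMG_0002.CR2',)]
```

The index on the keyword column is what keeps searches fast even across tens of thousands of photos.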

• Occasionally I need to do more sophisticated manipulation of the images, and in those rare cases the tool-of-choice is Adobe’s Photoshop CS5 – which is not only incredibly sophisticated software, but which also has plug-in capabilities.

But wait, there’s more:

• I often share my photographs on Facebook or upload them to Google’s Picasa service. Both those services are built around massively distributed databases with strong backup. Facebook uses a map/reduce-based system for distribution of metadata about the photographs, letting all my friends, friends of friends, and others see the pictures, see if they’ve been tagged, read and add comments, recognize faces, and so on. My mind boggles when considering the data/metadata infrastructure within these social-media giants.
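The map/reduce idea behind that metadata distribution can be shown in miniature – a single-process Python simulation of the two phases, counting how often each friend is tagged. In a real system the map and reduce steps run on many machines; all the data here is invented:

```python
# Toy illustration of the map/reduce pattern for photo metadata:
# count how often each friend is tagged across many photos.
from collections import defaultdict

photos = [
    {"id": 1, "tags": ["alice", "bob"]},
    {"id": 2, "tags": ["alice"]},
    {"id": 3, "tags": ["bob", "carol"]},
]

# Map phase: emit (tag, 1) pairs from each photo independently.
mapped = [(tag, 1) for photo in photos for tag in photo["tags"]]

# Shuffle/reduce phase: group by key and sum the counts.
counts = defaultdict(int)
for tag, n in mapped:
    counts[tag] += n

print(dict(counts))  # {'alice': 2, 'bob': 2, 'carol': 1}
```

Because each map step looks at only one photo, the work parallelizes across as many machines as the photo set demands – which is how these services scale to billions of images.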

• If I want to print the images, my Canon ink-jet printer also has some pretty advanced algorithms to transform the bits into the 10 specific colors of its pigment tanks, and implement dithering patterns to create even more apparent colors.
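Error-diffusion dithering, the core of that trick, fits in a few lines. Here is a one-dimensional grayscale sketch; real printer firmware diffuses error in two dimensions across multiple ink colors, so treat this as the idea only:

```python
# One-dimensional sketch of error-diffusion dithering: each gray value
# (0-255) is snapped to pure black (0) or white (255), and the rounding
# error is pushed onto the next pixel, so mid-tones come out as a mix
# of black and white dots.

def dither_row(pixels):
    out = []
    error = 0.0
    for p in pixels:
        value = p + error
        quantized = 255 if value >= 128 else 0
        error = value - quantized   # carry the rounding error forward
        out.append(quantized)
    return out

print(dither_row([128, 128, 128, 128]))  # [255, 0, 255, 0]
```

A run of 50% gray becomes alternating black and white dots – stand back, and the eye averages them into a mid-tone the printer has no ink for.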

Have I written any photo software? Nope. But between embedded firmware in my camera and printer, desktop software on my iMac, and Web-based software in the cloud… there’s a lot to think about while setting up that perfect shot of Thanksgiving dinner.

About Me

Co-founder and editorial director of BZ Media, which publishes SD Times, the leading magazine for the software development industry. Founder of SPTechCon: The SharePoint Technology Conference, AnDevCon: The Android Developer Conference, and Big Data TechCon. Also president and principal analyst of Camden Associates, an IT consulting and analyst firm.