Three kinds of software innovation, and whether patents could possibly work for them
In connection with an attempt to articulate my views on software patents (more on those below), I was thinking about the different ways in which software development can be innovative. And it turns out that most forms of software innovation can, at their core, be assigned to one or more of three overlapping categories:
- Direct improvement in user interface or functionality. Examples (again overlapping) include:
- True UI enhancements.
- Application functionality that just lets you do more.
- Most modern mobile, web, and/or social software efforts, in which a relatively small amount of coding effort produces features that may or may not lead to rapid viral adoption.
- Ease or functionality not just for end users, but also for administrators. In particular, SaaS, cloud, private cloud and/or appliance benefits are commonly concentrated in this area.
- Programming languages and other programmer aids.
- Performance/efficiency improvement. Overlapping examples include:
- Anything that directly purports to improve response time, hardware cost or utilization, or power/floor space consumption.
- Anything to do with parallelization or scale-out.
- Many, many under-the-covers enhancements to make data more protected (against theft or loss alike), user features snazzier, and so on. With a few exceptions – which are generally regarded as unsolved artificial intelligence problems – almost anything can be hacked together quickly in some high-level programming tool, assuming performance is of no concern. It’s getting the performance remotely right that can often slow market introduction.
- New or enhanced logical data model. Examples of innovation via data model – either truly new or else just newly implemented in a performant way — include:
- A huge fraction of application innovation, in “traditional” functionality and workflow alike. In several technological eras, just about everything about applications has been a commodity except the data model, yet the data model alone was enough to provide long-lasting product differentiation. That probably remains true today, although it may finally change as business intelligence integration becomes a large part of application software technology.
- Most things that are called knowledge representation.
- Many things that are described by terms like “unstructured” or “semi-structured” data.
- Most innovations described by terms such as metadata management.
To check that I’m not being too glib here, let’s consider a few categories of software technology.
- MPP analytic DBMS are all about performance/efficiency improvement (whether of SQL queries or other analytics), except when they’re about ease of administration and the like.
- Hadoop is about scaling out cheap machines in a way that is (for some purposes) easy to program.
- The core of NoSQL is about efficient scale-out; easier programming also plays a big role.
- Disruptive small vendor business intelligence innovation has a lot to do with better and more useful user experiences, except when it’s about ease of programming and/or administration. The BI industry is also moving to in-memory analytics, which harnesses better performance to provide more interactive user experiences.
- SAS, which has long competed on the basis of superior functionality for statistical programmers, is now also on a big performance kick via MPP analytic DBMS partnerships.
- Oracle’s DBMS efforts have long been focused on performance and administrative usability.
- As noted above, enterprise application functionality is usually all about the data model. Exceptions arise when there is a major generation of UI functionality, such as interactivity (long ago), GUIs (ditto), or BI integration (in its early days now). SaaS is also pitched as an ease-of-everything play.
- Administrative tools are usually about making administration easier. In a few cases (e.g., backups), they’re more about performance.
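As an aside on what “efficient scale-out” means concretely in the NoSQL bullet above: most such systems partition keys across cheap machines, typically by hashing, so that capacity grows by adding nodes. Here is a deliberately minimal sketch of that basic idea — illustrative only, not taken from any particular product; real systems layer consistent hashing, replication, and rebalancing on top:

```python
import hashlib

class ShardedStore:
    """Toy key-value store partitioned across N nodes by key hash.

    Illustrative only: real NoSQL systems add consistent hashing,
    replication, and rebalancing on top of this basic idea.
    """

    def __init__(self, num_nodes=4):
        # Each "node" is just an in-memory dict in this sketch.
        self.nodes = [dict() for _ in range(num_nodes)]

    def _node_for(self, key):
        # Hash the key to pick a node deterministically.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)
```

The point is how little machinery the core idea requires; the hard (and differentiating) parts are exactly the performance and administration issues discussed above.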
I’d say my proposed trichotomy is holding up pretty well.
So what set me off on this line of reasoning? Well, Stephen O’Grady wrote:
The reason I am against software patents is … very simple. … I am against software patents because it is not reasonable to expect that the current patent system, nor even one designed to improve or replace it, will ever be able to accurately determine what might be considered legitimately patentable from the overwhelming volume of innovations in software. Even the most trivial of software applications involves hundreds, potentially thousands of design decisions which might be considered by those aggressively seeking patents as potentially protectable inventions. If even the most basic elements of these are patentable, as they are currently, the patent system will be fundamentally unable to scale to meet that demand. As it is today.
In addition to questions of volume are issues of expertise; for some of the proposed inventions, there may only be a handful of people in the world qualified to actually make a judgment on whether a development is sufficiently innovative so as to justify a patent. None of those people, presumably, will be employed by the patent office. … Nor will two developers always come to the same conclusions as to the degree to which a given invention is unique.
In considering whether I agreed, I realized that the analysis is different for each of my three categories of innovation mentioned above.
- In the case of a logical data model, O’Grady is almost surely right. Many of those are just copied from the real world anyway, and hence don’t meet any kind of “novel and non-obvious” test. The rest are so general and abstract it’s really hard to say what – if anything – is new and non-obvious about them vs. well-established, often academic prior art.
- In the case of performance enhancements, the core ideas can usually also be found in well-established computer science publications. What’s more, the true innovations may be such simple algorithms that they’re not patentable. What’s left over is incremental enhancement. Once again, O’Grady is right.
- But the case of user interface/experience enhancements is not so clear. Inventor comes up with a useful idea for something that hasn’t been built before. Inventor builds and patents it. I’m not sure how that’s different from the case of building physical devices of various kinds, which have been patented for centuries. Determining what’s novel or non-obvious doesn’t seem to require specialized technical knowledge, at least not above and beyond that required in other disciplines.
Bottom line: There are many other reasons to oppose software patents, but Stephen O’Grady’s “It’s impossible to adjudicate them fairly” argument remains unproven, at least when it is applied to software enhancements whose essence is better designs for user experiences.
Related links:
- My negative comments about patents in the areas of MapReduce and columnar DBMS
- Three standpoints from which to view a software product strategy
Comments
The biggest problem with software patents is that, contrary to mechanical patents, it’s often the idea that’s being patented rather than the execution.
It used to be that when a competitor came out with a great new product, you could put some engineers in a clean room for a month, tell them to build the same product without telling them anything about how it worked, and that was OK.
For some reason, that isn’t ok in software patents.
Also, you could actually look at the mechanical patents, design around them and bring that new product to market in a non-infringing way. There’s no real way to do this with software patents either because of the broad and vague claims they’re allowed to make.
Overall, perhaps the idea of software patents isn’t broken in the abstract, but any implementation of it seems fundamentally flawed to me. (Especially the ridiculous mess we have now.)
The example I always think of when I hear broad rejections of the very notion that software should be patentable is Dolby sound processing. The original Dolby A, back in the 1970s, was a huge step forward, eliminating the background hiss on analogue magnetic tapes that a generation of listeners had taken as an inevitable characteristic of recordings. The follow-on Dolby B (or was it C?) made tape cassettes actually usable as the first portable music medium.
Both Dolby versions were implemented as circuitry – a couple of rack units for Dolby A, a clever circuit board for Dolby B. But at heart both were just signal processing algorithms. Today, you could implement them in software with little time and effort.
By traditional standards, Dolby sound processing was clearly patentable (and was, indeed, patented). But suppose DSPs had been more advanced at the time, and the algorithms had been implemented in software rather than by soldering parts together. Should that have affected their patentability? Why?
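For readers curious how such circuitry reduces to a few lines of code: Dolby noise reduction is at heart a compander — it boosts quiet passages before recording and cuts them back on playback, so that tape hiss added in between is cut along with them. The toy sketch below uses simple square-root companding to show the principle; it is emphatically not the actual patented Dolby algorithms, which operate per frequency band with carefully shaped gain curves:

```python
import math

def compress(samples):
    # Square-root companding boosts quiet samples relative to loud ones
    # before "recording". (Toy illustration, not the real Dolby curves.)
    return [math.copysign(abs(s) ** 0.5, s) for s in samples]

def expand(samples):
    # Squaring inverts the boost on "playback". Hiss added between the
    # two stages is squared too, and so comes out far quieter.
    return [math.copysign(s * s, s) for s in samples]
```

Round-tripping a signal through `compress` then `expand` recovers it, while low-level hiss injected after `compress` (say, amplitude 0.01) is attenuated to 0.0001 by `expand` — which is the whole trick.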
It’s interesting that the Bilski case – now before the Supreme Court – returned to a much earlier approach for judging patentability, looking at “the kind of thing” that is being done in the patent. It looks positively on patents involving transformations to the real world, negatively on things that just manipulate abstractions inside the computer. Dolby software would probably have passed muster under the proposed Bilski standards.
Two other things:
– At one point I was involved in reviewing engineers’ proposals for patent filings at a large corporation. One judgement we made was: Is this something visible from the outside, or is it purely an internal algorithm? We tended to reject the latter, because as a practical matter it’s not useful: It’s too difficult to tell if someone is using your algorithm if there are no reasonably unambiguous outward signs. Many of the algorithmic patents out there that people raise such a ruckus about are of this nature. If they play a role in the real world, it’s usually as part of large portfolios that corporations lob at each other: Lawsuits get started over visible patents, and the internal algorithms then show up, or threaten to show up, during discovery. It’s not that they aren’t an issue – but given the realities, even if you’re a strong believer in software patents, you shouldn’t be too concerned about these kinds of patents. In the best of circumstances, they do damage without actually helping inventors.
– People have the feeling that the patent office is incapable of reviewing software patents and just accepts anything. I had the experience of having a software patent rejected by the examiner four times based on prior art he found. The first time, what he found was actually pretty close, and it took some effort to show that what I and my fellow inventor were proposing was actually different. The second rejection was somewhat easier to deal with, and by the third and fourth the examiner was clearly in the realm of diminishing returns. Still … at least some software patents are now receiving significant, knowledgeable review. Anecdotal evidence, of course – but that’s what we have for all the bad examples of software patents as well.
— Jerry
Jerry,
Good points all — and not contradictory to mine. 😉
http://en.wikipedia.org/wiki/In_re_Bilski currently is a pretty good read. The parts on dissenting opinions outline the issues neatly.
Well, some of the issues. They don’t speak well to algorithm-gadget equivalence — i.e., the problem you noted in distinguishing among the patentability of multiple versions of the Dolby technology.