Human real-time
I first became an analyst in 1981. And so I was around for the early days of the movement from batch to interactive computing, as exemplified by:
- The rise of minicomputers as mainframe alternatives (first VAXen, then the ‘nix systems that did largely supplant mainframes).
- The move from batch to interactive computing even on mainframes, a key theme of 1980s application software industry competition.
Of course, wherever there is interactive computing, there is a desire for interaction so fast that users don’t notice any wait time. Dan Fylstra, when he was pitching me the early windowing system VisiOn, characterized this as response so fast that the user didn’t tap his fingers waiting.* And so, with the move to any kind of interactive computing at all came a desire that the interaction be quick-response/low-latency.
*That was well put. Unfortunately, VisiOn didn’t meet Dan’s standard, which is a big part of why VisiCorp wound up on the ash heap of history.
Once again, we’re in an era that features:
- A move from batch to interactive computing.
- Users’ desire for zero-wait interactions.
The two big examples I have in mind for a batch-to-interactive trend are:
- The replacement of batch data warehouse loading by continuous feeds.
- More generally, a movement to integrate short-request and analytic processing.
My top examples for zero-wait interactions are:
- “Speed of thought” business intelligence.
- Anything to do with consumer web page response times.
Let me confess something — I’m conflating two different kinds of low latency, namely database freshness and user interface response. My two main reasons are:
- If you want to make decisions based on fresh data, you probably don’t want to take a long time making them.
- If you care enough about an analytic problem to repeatedly query a database, then you probably would like the database to be as fresh as possible.
I’ve been conflating those two things at least since I first came up with the speed of a turtle vs. speed of light analogy.
But how should we refer to more-or-less-immediate computing? The term “interactive” has long been played out. “Real-time” has definitional issues, as captured in the Wikipedia passage:
Real-time programs must guarantee response within strict time constraints. Often real-time response times are understood to be in the order of milliseconds and sometimes microseconds. In contrast, a non-real-time system is one that cannot guarantee a response time in any situation, even if a fast response is the usual result.
The use of this word should not be confused with the two other legitimate uses of real-time, which in the domain of simulations, means real-clock synchronous, and in the domain of data transfer, media processing and enterprise systems, the term is intended to mean without perceivable delay.
Similar problems adhere to a term I nonetheless sometimes use, namely “quasi real-time”.
The Sumo Logic guys propose an interesting alternative: human real-time. Billy Bosworth recently emailed me with a similar idea, from a conference panel that obviously struck a nerve. I like it, because it conveys the impression:
- Effectively real-time from a human perspective …
- … but not necessarily from a machine standpoint.
So am I overlooking some drawback to the term? If not, I’m going to start using “human real-time” to mean something like fast enough that humans don’t perceive an annoying lag.
Comments
I think that you are onto something very valuable here Curt, and was immediately struck by the term when I saw your original tweet.
I think it captures the sense that ‘real time’ is actually where humans live, and in general they don’t want anything to impede the flow of what they are doing – this would include the need for sub-second response from a TP or developer toolkit to aid productivity, or through to ‘train of thought’ BI. In each case we just want to have a conversation with our data / system and not have to be interrupted by delays, just as when we converse with someone we don’t want to have gaps of hours or days in-between what we are talking about.
Think the speed of turtles and speed of light analogy is great – what it sparked in my head was that what mattered was the difference in pace between the real world process and the data (input / query) process. Where these are in sync, whatever the time frame, we have the perception of real time working. Where they are out of sync – multi-second delays in OLTP, or very long responses in BI – we have that jarring perceivable delay that you mention. We forget that that delay only exists because of technology. As the technology changes so do the trade-offs and the way we are able to use it, so it makes a qualitative difference to the way we work, not just a quantitative speeding up.
The other great thing about the term is that it puts the emphasis back where it belongs – on the human. I’ve noticed that when we talk about real time we’re often seeking examples where we absolutely have to have a response in X milliseconds – that is, focussing on the real time process and moreover looking for an extreme example of it. However, I guess that when looked at from the other direction, from the human point of view, we’d like all our interactions to be in ‘real time’. That way they keep pace with what we are doing in the real world, both working with systems and interacting with others. It’s actually unnatural to have delays. So maybe when people talk about ‘Google and smart devices training people to expect instant response’ it is actually the opposite, that desire has always innately been there, it’s just we’ve had decades of technology constraints training us to accept delays. Hence the recent discussion of this arising now as the technology has the potential to get us back to what is actually more natural and normal.
Anyhow great post, both helped and clarified a lot of things for me, many thanks.
(BTW these are my own personal and individual views)
How about willing-to-wait, with a k00l acronym of WTW? Acceptable response time would be a human factors research term from way back. It’s an ART!
I know I get frustrated with the stupid non-sequitur pauses in voice generation systems.
I’m not a fan of overloading “real time”. For example, I find it really annoying when SAP says that Hana is “real real-time”, when nothing could be further from the truth. Real real-time developers should be insulted.
There’s a long-established meaning for “real-time”, and adding an adjective on the front, like “human”, doesn’t help. It just creates confusion. People will think that “human real-time” is a sub-class of real-time (inheriting its properties), when in fact, it’s unrelated.
I think we need a different term. What we really mean is just “fast” or “responsive” or “interactive”. Like a game.
I’m not sure what the right word is, but I know it doesn’t involve “real-time”.
Greg
Greg,
I’d say that “real-time” in common usage would have a subclass for what the purists want to call “real-time”.
The purists’ use of the term is probably the older one, but that’s not dispositive, as per Monash’s First Law of Commercial Semantics: Bad jargon drowns out good.
I like it.
When programmers talk about “real-time” they mean something that is completely different from what users expect. But that doesn’t stop them from using the term when speaking to users.
I don’t think we need to get hung up on exact definitions, it’s pretty clear that people generally would regard ‘real time’ as meaning ‘keeping up with what’s happening’ or as Curt says ‘fast enough that humans don’t perceive an annoying lag’ rather than the more purist meaning. We can argue as to whether this is the fault of Hollywood or TV shows but it has entered the language in this more general form, and for many/most business users this colloquial usage is what they understand.
Users find the idea of a system that keeps up with the pace they are working at attractive. Classic real-time systems, e.g. for process control, tend to have a very narrowly defined scope, but deliver response time that is much less than a human typically needs. So, maybe a human real time system is one that provides a wider range of functions (e.g. encompassing ad hoc query), but delivers the answer fast enough for there not to be that annoying delay that impedes the thought process.
So the new pre-qualifier ‘human’ can make this clear what kind of system we mean, and maybe that in turn allows us to be more precise when we use the original unqualified definition.
Thought: when we use the term real time it also begs the question as to how time is being perceived and measured. The new phrase makes clear that what’s important with these systems is the perceptions of the human involved. The original systems can still be judged by their responsiveness to non-human events, through strict service levels in responding to physical processes that require faster than human reaction times.
The more I think round this the more I like it, it reminds us that we should be producing systems that fit with the way people naturally work, and looking at it from their perspective. Thus in this case the prime goal of the [human] real time process is simply to serve the human and support their natural flow of investigations and tasks.
I think Human Time is better than Real-Time, but only insofar as it’s not overloaded – yet. But for it to be truly meaningful, it needs to be more clearly defined than “fast enough that the human perceives it to be real-time, but not machine-time.” What does that mean? Since it takes a human about 50 milliseconds to press a key, does that mean that Human Time = 50 milliseconds?
So let’s double it and say that Human Time = about 100 milliseconds. The next question is WHAT takes 100 milliseconds?
For example, a trader at a bank would define Human Time as the time between hitting a button to “buy” a stock and seeing on the screen that an execution has been completed. Let’s break down the technology steps to perform this task:
1. Trader presses button on GUI (“buy”)
2. GUI to send message to server (publish a message on 29 West)
3. Execution management system sends trade to exchange (sends a FIX message)
4. Execution management system receives confirmation (receives FIX message back)
5. Execution management system sends message on bus (publish message on 29 West)
6. Trader’s machine receives message from bus and unpacks
7. Trader GUI refreshes
Trade executions happen tens of millions of times a day.
If Human Time = 100 milliseconds, and 50 are gone in pressing the button, that means we have 50 left!
So what part of that flow is “Human Time?” Traders would say Human Time is the time it takes from when I hit the button to the time the trade shows as executed on my GUI. In other words, all 7 steps in 100 milliseconds. And that’s about right for doing all 7 steps completely via messaging.
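The budget arithmetic above can be sketched as a quick sanity check. The per-step latencies below are illustrative round numbers I’m assuming for the sake of the exercise, not measured figures from any real trading stack:

```python
# Illustrative latency budget for the seven-step trade flow above.
# All per-step figures are assumed round numbers, not measurements.
budget_ms = 100  # the "Human Time" target

steps_ms = {
    "1. trader presses buy button": 50,   # human motor action
    "2. GUI publishes order on bus": 2,
    "3. EMS sends FIX order to exchange": 10,
    "4. EMS receives FIX confirmation": 10,
    "5. EMS publishes result on bus": 2,
    "6. trader's machine receives/unpacks": 2,
    "7. GUI refresh": 16,                 # roughly one frame at 60 Hz
}

total = sum(steps_ms.values())
for name, ms in steps_ms.items():
    print(f"  {name}: {ms} ms")
print(f"total: {total} ms of a {budget_ms} ms budget")
assert total <= budget_ms, "flow would exceed the Human Time budget"
```

The point is simply that once the human keypress consumes half the budget, the remaining six machine steps must fit in the other 50 ms, which is comfortable for messaging but tight for anything involving a disk-backed query.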
Trading systems like this use messaging, not databases, because even with continuous loading, they can’t afford the latency hit of a database loading tens of millions of events a day; databases sit in the back office. So what is a Human Time warehouse?
From a “Human” perspective, this workflow PUSHES updates to the GUI via messaging; users expect a live, auto-refreshed view of trading results. In other words, there can be no warehouse step where the user re-sends his query to the warehouse.
So in this example, there is NO room for either a database or a traditional client/server model of updating a user screen, because they truly expect “Human Time.”
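The push model Mark describes — subscribers updated as events arrive, with no re-query round-trip — can be sketched minimally. The `Bus` class and handler names here are hypothetical toys standing in for a real messaging layer such as 29 West:

```python
# Minimal in-process sketch of push-based updates: subscriber handlers
# are invoked as events arrive, rather than polling or re-querying a
# warehouse. The Bus class is a toy stand-in for a real messaging layer.
from collections import defaultdict

class Bus:
    """Toy publish/subscribe bus."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)  # pushed immediately; no query round-trip

# A GUI-like view that stays current without re-sending queries.
latest = {}
def on_execution(event):
    latest[event["order_id"]] = event["status"]

bus = Bus()
bus.subscribe("executions", on_execution)
bus.publish("executions", {"order_id": "IBM-1", "status": "filled"})
print(latest)  # the view already reflects the fill
```

The design point is that the screen’s state is a side effect of the event stream itself, so freshness is bounded by message delivery latency rather than by how often a user chooses to re-run a query.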
So, what is the definition of Human Time?
– Mark Palmer, CEO, StreamBase
P.S. Of course, as you know, this is why we built LiveView with in-memory active tables, messaging, and push-based GUI updates. So I would argue LiveView fits in this definition of Human Time, but I am trying to understand how we fit into your definition (or don’t!)
Mark,
I’d say that it depends on WHY the trader wants to see an answer within 100 milliseconds. If he really needs confirmation of the trade going through before he can do anything else, that would argue for using the term. Traders are an admittedly unusual subclass of human beings.
But if that’s not really the point, and if it’s really just a question of doing the trade as fast as possible so as to get in ahead of any other market players, that’s a different matter.
Good question. There are many reasons; chiefly, a trader needs to guide the automated workflow and take action whether things are going well or are broken. The complexity I left out is that at a large bank, an order to “buy 100,000 shares of IBM” might actually take a long time depending on how aggressive the trader wants to be with their execution algorithm… hundreds of tiny executions might be required to fill the order.
So out of the box, the trader will want Human Time acknowledgement that the execution system has started to work the order. If not, then there might be a problem with how the order was entered, or perhaps a problem with the exchange it was routed to. The trader might have to act as a result – for example, re-enter the order, or route it to another exchange.
Or, if a large order begins to be filled, but the algorithm isn’t performing according to his benchmark / expectation, he might adjust the way the algorithm works by canceling and re-submitting the order.
Or, if the order is for a client who wants to cancel, then he really needs a “Human Time” update that validates that the order is actually cancelled, because in the world of high frequency trading, trades can be completed in microseconds, much faster than a human can press a key.
And so on – today’s trader at a broker is actually *guiding* an automated, machine-driven process. So his insight has to rival the speed of the thing he is managing.
Mark,
I think you’ve made a convincing case that human real time can go well down into the sub-second range.
At a certain point we’re below the time threshold for serious decision-making, but rather in the range of “higher frame rates make for smoother and less disconcerting pictures”. So be it. 🙂
Yes, no doubt a subset of Human RT requirements need 100ms or less response times.
But I think this illustrates what we’ve said above: the perception of realtime depends on the human involved.
If we focus on the term realtime and set a fixed response time bar we have to work with one inflexible number.
But if we focus on the human user we see that, depending on the task, a whole range of task-dependent response times would be regarded as realtime. E.g. take the simple and common case in BI of wanting to run a query to contribute to an ongoing discussion; if that could be done without breaking the flow of the human conversation, then I’d say the participants would think they were working in realtime.
A way of assessing the benefits of this might be to take Curt’s different examples of analytic speed and note how many users might benefit from having their response time expectations met. There will be those who need 100ms response, and many for whom hours or days for planning might suffice, but between them is a very large mass of users for whom a response that allows ‘train of thought’ or ‘keep up with my conversation’ interaction would provide worthwhile productivity benefits.