Interface Culture – Agents

Posted: July 24th, 2003

Reading Steven Johnson on “Agents” made me realise Pascal and I were not too far off when we developed our Negotiating Agent Development Kit back in the good old days. We actually implemented quite a few of the things Johnson wrote about in 1997.

A few of Steven Johnson’s thoughts:

“[...] digital automatons have become a major topic of discussion within the industry. These designers are, in a sense, the imaginative heirs of Hoffmann, working through the risks and the promise of mistaking machines for humans, resolving our anxiety about such a mix through works of profound creativity.”

“The new interface paradigm brings us close to Olimpia’s glassy stare: instead of space, those zeros and ones are organized into something closer to an individual, with a temperament, a physical appearance, an aptitude for learning – the computer as personality, not space. We call these new creatures – these digital “personalities” – agents”.

“Agents also differ in their preferred habitats. Some of them are shut-ins and sycophants: they settle into your computer’s hard drive and stay there for good, watching your behavior and helping out when they get a chance. Other agents are full-time tourists, roaming across the Net in search of information and trudging back home only when there’s news to report. Some agents are extroverts; they compile relevant data for you by chatting with other agents, swapping stories and recommendations. These three classes represent the range of possibilities for agent-driven interfaces: the “personal” agent, the “traveling” agent, and the “social” agent. Each implies a different understanding of human-computer interaction, [...]”
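The three classes map quite naturally onto code. The sketch below is purely illustrative: the class names and behaviours are mine, not Johnson’s and not lifted from our DevKit.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Common contract for Johnson's three agent classes."""

    @abstractmethod
    def act(self) -> None:
        ...

class PersonalAgent(Agent):
    """Lives on the local machine, watches behaviour, helps when it can."""
    def __init__(self):
        self.observed_actions: list[str] = []

    def observe(self, action: str) -> None:
        self.observed_actions.append(action)

    def act(self) -> None:
        # e.g. suggest a shortcut after seeing the same action repeatedly
        if self.observed_actions.count("open_report") > 3:
            print("Tip: pin the report to your toolbar?")

class TravelingAgent(Agent):
    """Roams remote hosts and comes home only with news to report."""
    def __init__(self, itinerary: list[str]):
        self.itinerary = itinerary
        self.findings: list[str] = []

    def act(self) -> None:
        for host in self.itinerary:
            self.findings.append(f"checked {host}")  # stands in for a remote query

class SocialAgent(Agent):
    """Compiles recommendations by swapping notes with other agents."""
    def __init__(self, peers: list["SocialAgent"]):
        self.peers = peers
        self.recommendations: list[str] = []

    def act(self) -> None:
        for peer in self.peers:
            self.recommendations.extend(peer.recommendations)
```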

“They do things for you. Being able to delegate responsibility to a software agent can be enormously liberating: the traditional interface gives way to a more oblique system, where your commands trickle down through your representative”.

“The original graphic-interface revolution was about empowering the user – making “the rest of us” smarter, and not our machines. Agents work against that trend by giving the CPU more authority to make decisions on our behalf.”

“The traveling agent imagined here is probably the most clearly realized model of the agent-as-representative idea: the agent represents you in its dealings with other agents, shifting through an entire repertoire of personae over the course of a day: a clothes buyer one moment, a personal secretary the next”.

“Telescript, technically speaking, was a communications protocol, not an operating system. It provided a common language that agents could rely on when negotiating their deals, a lingua franca for the ‘bot community.”
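The sketch below is not Telescript; it only illustrates what such a shared negotiation vocabulary might look like, and the message fields are assumptions of mine rather than anything from the spec or from our DevKit.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class NegotiationMessage:
    """A common envelope two agents could agree on before haggling.
    Field names are illustrative, not drawn from Telescript."""
    sender: str        # identity of the originating agent
    performative: str  # "offer", "counter-offer", "accept", "reject"
    item: str          # what is being negotiated
    price: float       # proposed price in an agreed currency

    def encode(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def decode(raw: str) -> "NegotiationMessage":
        return NegotiationMessage(**json.loads(raw))

# A buyer agent opens with an offer; a seller agent parses it and counters.
offer = NegotiationMessage("buyer-7", "offer", "flight LHR-GVA", 420.0)
wire = offer.encode()
counter = NegotiationMessage.decode(wire)
counter.performative, counter.price = "counter-offer", 460.0
```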

“This agent is not an inert document like an e-mail message, but rather a miniprogram of sorts, one that will continue to run on a remote hard drive long after you’ve disconnected from the service and returned to more pressing matters.”

“Today’s advertising agencies will become tomorrow’s counteragent agencies.”

“But autonomous agents like those envisioned by Telescript will soon appear on the Net, one way or another: Will they result in Jaron Lanier’s brutal ecology of agents and counteragents? Or is there another, less menacing alternative?”

“As long as the agent only executes clearly defined commands, it’s hard to see a problem. The more mathematical the commands, the better: An agent dispatched to buy a plane ticket for less than five hundred dollars or a technology stock with a price-earnings ratio below ten – this sort of agent hardly poses a threat to “the future of culture and society”.”
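These “mathematical” commands are easy to picture as simple threshold checks. A minimal sketch, with the sample prices invented for the example:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    """A clearly defined, purely numeric instruction for an agent.
    The agent may only act when the predicate is satisfied."""
    description: str
    predicate: Callable[[float], bool]

    def should_execute(self, observed_value: float) -> bool:
        return self.predicate(observed_value)

# The two examples Johnson gives, expressed as threshold checks.
buy_ticket = Command("plane ticket under $500", lambda price: price < 500)
buy_stock = Command("tech stock with P/E below 10", lambda pe: pe < 10)

print(buy_ticket.should_execute(480))  # True: the agent may buy
print(buy_stock.should_execute(14.2))  # False: the agent holds off
```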

“The real breakthrough, we’re told, will come when our agents start anticipating our needs – the intelligent agent that makes an appointment with the nutritionist after noticing the number of pizza delivery charges on the monthly credit card bill, or has flowers delivered a day before that anniversary you’re always forgetting.”
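Anticipation of that sort could start as nothing more than a counting heuristic. A toy sketch, with a made-up threshold and an invented statement:

```python
# Count recurring pizza charges on a month's statement; past a threshold,
# the agent proposes (not books) a nutritionist appointment.
THRESHOLD = 4  # arbitrary cut-off for illustration

def suggest_nutritionist(statement: list[dict]) -> bool:
    pizza_charges = [tx for tx in statement if "pizza" in tx["merchant"].lower()]
    return len(pizza_charges) >= THRESHOLD

statement = [
    {"merchant": "Pizza Presto", "amount": 18.5},
    {"merchant": "Grocery Co", "amount": 64.0},
    {"merchant": "Pizza Presto", "amount": 21.0},
    {"merchant": "Pizza Presto", "amount": 19.0},
    {"merchant": "Pizza Presto", "amount": 23.5},
]
print(suggest_nutritionist(statement))  # True: four pizza orders this month
```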

“The only thing worse than receiving a piece of junk mail is being duped into opening one”.

“I don’t really want my computer guessing what information I am looking for. [...] What I want is a better way to get to that information”

“The trouble begins when our agents start meddling with our subjective appraisals of the world, when they start telling us what we like and don’t like, like an astrologer or a focus group. This is where Lanier sees the dance of the bumblebee emerging, all those counteragents luring regular agents with their info-pollen”.

“But how are our agents going to get that smart? Lanier thinks they won’t, not ever: The humans will just get more stupid, in a kind of regressive coevolution. Tastes, after all, don’t translate easily into simple formulas.”

“What you want the agent to do is find those obscure records that you’ve never heard but that match your tastes perfectly.”

“All the agent wants to do is make an educated guess and be rewarded with a high rating.”
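That guess-and-reward loop is easy to caricature in a few lines. The weighting scheme below is my own simplification, not anything Johnson (or Lanier) proposes:

```python
import random

class GuessingAgent:
    """Keeps a score per genre; higher-scored genres are guessed more often,
    and the listener's rating nudges the score up or down."""

    def __init__(self, genres: list[str]):
        self.scores = {g: 1.0 for g in genres}

    def guess(self) -> str:
        # Weighted choice: genres that earned good ratings come up more often.
        genres = list(self.scores)
        weights = [self.scores[g] for g in genres]
        return random.choices(genres, weights=weights, k=1)[0]

    def reward(self, genre: str, rating: int) -> None:
        # Rating on a 1-5 scale; 3 is neutral, anything above raises the score.
        self.scores[genre] = max(0.1, self.scores[genre] + (rating - 3) * 0.5)

agent = GuessingAgent(["ambient", "jazz", "techno"])
pick = agent.guess()
agent.reward(pick, 5)      # the listener loved it, so that genre climbs
agent.reward("techno", 1)  # a miss, so techno becomes a rarer guess
```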

“The human-machine hybrids of Blade Runner and Frankenstein came into the world as simplifiers, problem solvers, labor savers. But by the end they were chaos machines. We can only hope the true cyborgs of the future turn out to be more stable than the fiction.”