AI User Interfaces - follow the input


I’m exploring a few ideas about how AI might be used in the near future. Right now we all interact with LLMs through chat interfaces that pretend to type out text slowly, but I don’t believe this metaphor will stick around. While chat was incredibly effective at communicating why LLMs are different from non-generative algorithms, it’s not very efficient for everyday use.

So how else would AI be used?

My first idea: what if AI followed your focus?

The simplest way I could think of to show this is with the mouse pointer. Let’s assume you have a local LLM installed and it’s following your mouse pointer as input.

When you hover over something, it can describe what you’re hovering over or offer contextual information.

Examples

Hovering a folder would give you context about the folder.

[Image: AI cursor explaining the context of a folder]

Hovering an image would describe the image for you.

[Image: AI cursor describing a photo]
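Both hover examples could run on the same basic loop. Here is a minimal sketch of that loop in Python, where element_under_pointer and query_local_llm are placeholders I’m assuming for the OS accessibility layer and the locally installed model, not real APIs:

```python
# Sketch only: describe whatever the pointer is resting on.
from dataclasses import dataclass

@dataclass
class Element:
    kind: str       # e.g. "folder", "image", "text"
    metadata: dict  # whatever the OS can tell us about it

def element_under_pointer(x: int, y: int) -> Element:
    # Placeholder: in practice this would come from the platform's
    # accessibility layer, not a hard-coded folder.
    return Element(kind="folder", metadata={"name": "Holiday photos", "items": 214})

def query_local_llm(prompt: str) -> str:
    # Placeholder for a call to the locally installed model.
    return f"(model output for: {prompt!r})"

def describe_hover(x: int, y: int) -> str:
    element = element_under_pointer(x, y)
    prompt = (
        f"The user is hovering a {element.kind} with metadata {element.metadata}. "
        "Give a one-sentence description or a useful piece of context."
    )
    return query_local_llm(prompt)

print(describe_hover(512, 384))
```

The interesting part is that the prompt is built from whatever the OS already knows about the element, so the model only has to turn that metadata into plain language.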

And when you select something, like a piece of text, the LLM would offer local actions it could take for you, everything from copying the text to renting it on Apple TV.

[Image: AI cursor giving you actions based on your selection]
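The selection case could work the same way. A sketch, assuming the model is asked to answer with a JSON list of action names (the example actions here are my own guesses):

```python
# Sketch only: turn a text selection into a short list of local actions.
import json

def query_local_llm(prompt: str) -> str:
    # Placeholder for the locally installed model; assume it answers in JSON.
    return json.dumps(["Copy", "Look up", "Rent on Apple TV"])

def actions_for_selection(selected_text: str) -> list[str]:
    prompt = (
        f"The user selected the text: {selected_text!r}. "
        "Return a JSON list of three to five actions the computer could take locally."
    )
    return json.loads(query_local_llm(prompt))

print(actions_for_selection("Dune: Part Two"))
```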

Summary

In this world your personal LLM would act as a local assistant, making interacting with your computer a lot easier.

It could guide a novice user by using natural language to explain what to do next.

It could also take natural language commands, spoken or written, to trigger shortcuts for actions that today might require memorising complicated commands.
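As a rough sketch, that could be as simple as asking the model to pick from a fixed list of shortcuts. The shortcut names and the query_local_llm stub below are assumptions of mine, not an existing API:

```python
# Sketch only: map a natural-language command onto a known shortcut.
SHORTCUTS = {
    "empty_trash": lambda: print("Emptying the Trash…"),
    "toggle_dark_mode": lambda: print("Switching appearance…"),
    "take_screenshot": lambda: print("Capturing the screen…"),
}

def query_local_llm(prompt: str) -> str:
    # Placeholder for the locally installed model.
    return "empty_trash"

def run_command(utterance: str) -> None:
    prompt = (
        f"The user said: {utterance!r}. Pick the single best match from "
        f"{list(SHORTCUTS)} and return only its name."
    )
    choice = query_local_llm(prompt).strip()
    SHORTCUTS.get(choice, lambda: print("No matching shortcut."))()

run_command("get rid of everything in the bin")
```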

Natural language as the interaction layer is an interesting idea, and I have no doubt we will see this from Apple or Samsung in the near future.

