Eaton boosted

I'm an IT generalist. I can troubleshoot 802.1X authentication based on event logs. I can write basic SQL queries to check whether data is there. I know enough of the theory of user authentication to spot routine coding errors.

And that is enough. It is enough for almost anything I get called about. And that is why random people ping me in chat every day. When I am not enough, I get you to the specialists. Because I know they exist. Because I know what I don't.
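To be concrete about the "basic SQL queries" bit: a minimal sketch of the kind of is-the-data-there check I mean, with hypothetical table and column names picked purely for illustration:

```python
import sqlite3

# Hypothetical table and column names: a stand-in for "is the data there?"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.execute("INSERT INTO orders (customer_id) VALUES (42)")

# The actual check: count matching rows and see if anything came back.
row_count, = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE customer_id = ?", (42,)
).fetchone()
print("data is there" if row_count else "no rows for that customer")
```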

Show thread
Eaton boosted

Follow-up question for the audience related to my earlier VLSI/EDA history spelunking: does anyone who was there at the time (or knows the history well enough) know definitively which chips looked or felt like EDA breakthroughs inside the industry? I’m getting the impression “10,000 transistors” was the sort of late-70s breaking point for human design, but I’m struggling to parse which chips were understood to have clearly pushed past human limits via automation. Or if it was even discernible in a single generation, or more like spread over a few.

Like… it looks like maybe it was around the m68k or ns32k? But I’m not sure.

Eaton boosted

me: working from home gives me the opportunity to focus on tasks free from the distractions of a noisy office!

brain:

TROUT

TROUT

LET IT ALL OUT

THESE ARE THE FISH I CAN DO WITHOUT

SALMON

I’M TALKING TO YOU

SALMON

me: cool never mind

Eaton boosted

Well, fuck. The GPT disinformation age is now.

I googled "OS for 4gb ram" and the first hit, which also was used by google to populate its snippet is an answer from quora which is very obviously created with #chatgpt (I recognised the non-committal non-answer a away, but it can also be detected by a popular GPT detector).

The user has 98 answers and, you've guessed it, they are all created with GPT.
PLOT TWIST: The questions were also created with GPT!

https://www.quora.com/profile/Heri-Mulyo-Cahyo

#AI

Eaton boosted

My colleague, John F. Hughes, got asked for a paper that he hadn't written. The correspondent replied: "I used the chat.openai to gather new sources that I might have missed during my own scan of literature".

(I know librarians are being driven bonkers by GPT-3-manufactured book titles. But this is the first time I've heard of an academic paper request…)

Given the way the conversations about AI and multi-channel re-use are going these days, I'm really starting to feel like getting interested in generative grammars a decade ago was ADHD doing me a solid.

Last night's thread on the difference between pattern libraries and design systems — and the danger of assuming AI will "free" us from bland UX norms — ran long enough that I turned it into a post on the company blog.

It touches on multiple topics we've been discussing for the last few years — and I suspect the issue won't go away any time soon.

autogram.is/discussing/pattern

Eaton boosted

Design systems *are* the grammar and syntax and vocabulary of this stuff we build. Developing new and innovative solutions to emerging needs and problems means using those systems, whether the usage is intentional, intuitive, or unthinking.

That's true today and always will be — whether the grunt work is being done by a UX auteur, an overworked junior designer… or an AI.

Show thread

The solution is to engage with the concept of a design system as a language — one that can evolve through use to meet emerging needs. One that can vary contextually but retain consistency across many communities. And one that is only successful when it enables groups of creative humans to communicate effectively.

Show thread

Just as there are pedantic language prescriptivists who'll harangue you for dangling a participle when you're trying to explore a new idea in writing, humanity's breadth means there are design prescriptivists who'll heckle your exploratory UX for its lack of sandblasted uniformity.

The solution, though, isn't to abandon "systems" for "ideation" and let the AIs do the grunt work. LLMs are — quite literally — just regurgitating the patterns of past systems, reskinning them without understanding.

Show thread

Humans who speak a language combine the building blocks of its vocabulary according to the rules of its grammar. That's true even if their understanding of the grammar is hazy and informally acquired while trying to communicate with other speakers.

Design systems that transcend 'pattern libraries and component repos' are created and used in a similar way. Whether intentionally or intuitively, they are the repeated consistencies — the patterns — in the way problems are solved and work is done.

Show thread

LLMs create new things in the sense that Mad Libs books write new stories: They can generate previously-ungenerated output, particularly in cooperation with a creative person who knows how to nudge them in interesting directions.

But they are unalterably, inescapably anchored to the patterns past humans invented and popularized; patterns they were trained to mimic.

Patterns we call 'grammar' or 'plot' or 'genre.' Patterns we call 'design systems.'

Show thread

Once you look past the HDR stock art with too many fingers, the AIs they say will do the grunt work of building our new innovative UXs are actually straightforward:

Large Language Models sink enormous up-front time and money into collating patterns in existing human-created content. Then, prompts ("Too many noses," "High def skunks making spreadsheets," "A blog post titled 'Down with the design system,'" etc.) are matched with one or more previously-found patterns and "populated."
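As a loose analogy only (not how a real LLM works internally), here's a toy sketch of that "collate patterns, then populate from a prompt" shape, using a trivial bigram table over a made-up corpus:

```python
import random
from collections import defaultdict

# Toy stand-in for "collating patterns in existing human-created content":
# record which word follows which in a tiny, made-up corpus.
corpus = (
    "design systems are the grammar of the things we build "
    "design systems evolve through use to meet emerging needs "
    "pattern libraries are one part of a design system"
).split()

patterns = defaultdict(list)
for first, second in zip(corpus, corpus[1:]):
    patterns[first].append(second)

def populate(prompt: str, length: int = 8) -> str:
    """Match the prompt against the collected patterns and extend it."""
    words = prompt.split()
    for _ in range(length):
        options = patterns.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(populate("design"))  # e.g. "design systems evolve through use to meet ..."
```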

Show thread

I've been playing a lot with the past few generations of LLMs, particularly applications of their distinctive parroting to non-text problems. Using prompts to generate music, say. Or code. Or React components. Or page layouts.

I'm going to be generous and say that the future of AI UX the author dreams of might be a bit farther off than they suggest, but it *is* achievable.

What will get us there, though, isn't an explosion of Kai's Power UX experimentation. It'll be Design Systems.

Show thread

And that brings us to the kicker. Sure, they admit, we needed design systems (cough, pattern libraries) back in the old days to speed things up. Now, though...

"With the help of AI, we can generate in seconds, for pennies, a GUI for each user, a GUI for each moment, for each circumstance.

Show thread

The author seems to understand the underlying problems with "stack the patterns" design work. The list of "Component X doesn't fix Design Problem Y" punchlines that anchors the article's midpoint is exactly why a working system is more than a component library.

It's a bit like a language: if a writer's only engagement with it is grabbing words from the dictionary and heaping them together until "message" is achieved, things are going to suck. Language is more than a bag of words.

Show thread

That point is one @beep often returns to when I see him talk through these issues with clients; a library of UX components is one common part of a design system, but the *system itself* is something bigger.

A good one is also a shared set of strategies for solving visual and interactive communication challenges, a playbook rather than a script.

A good one, critically, is designed to grow and evolve in exactly the way the author claims “design systems” prevent.

Show thread

I can't imagine anything further from the reality of "design systems" I see with most of our clients.

There, inconsistently-funded design system teams must continually explain that they care a lot less about The Approved Button Styles than they care about getting *multiple inconsistent and evolving products* to work together more effectively. To get the teams who work on them speaking the same language.

Show thread