Gypity Ate My Homework
About a year ago I wrote a tongue-in-cheek
blog post about box shadows. Towards the end of the post, I asked Gypity if one could ray trace using box shadows, and it said you couldn't, despite my having done exactly that. I then demanded Mr. Antman fix this egregious error. After all, it is downright dangerous if people do not know that CSS box shadows are a perfectly viable method of ray tracing.
I am pleased to announce that Antman has taken my advice. A wise man. ChatGypity will in fact now answer the question correctly.
But it doesn't credit me...
Join me as we take a look at the Future Web, the odd, twisty-turny landscape of today's LLMs, starting with how one of the co-founders of Figma plagiarized me.
I like writing blog posts about weird side quests, usually involving buckets of box shadows. It is a fun hobby.
In one of those posts, I was able to render 20,000,000 particles in realtime on a CPU in JavaScript. I had no idea JavaScript could be so fast on an M1 CPU. It is fun because it is so impractical. You wouldn't expect it to work, but it does.
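The original post's implementation isn't reproduced here, but a minimal, single-threaded sketch of the core trick — packing all particle state into one flat typed array instead of an array of objects, so the update loop is cache-friendly and easy for the JIT to optimize — might look like this (the layout and names are illustrative, not from the post):

```javascript
// Particle state lives in one flat Float32Array: no objects, no GC churn.
// STRIDE of 4 floats per particle: x, y, vx, vy. All values illustrative.
const COUNT = 1_000_000;
const STRIDE = 4;
const particles = new Float32Array(COUNT * STRIDE);

// Scatter some initial positions and velocities.
for (let i = 0; i < COUNT; i++) {
  const o = i * STRIDE;
  particles[o] = Math.random() * 1920;     // x
  particles[o + 1] = Math.random() * 1080; // y
  particles[o + 2] = Math.random() - 0.5;  // vx
  particles[o + 3] = Math.random() - 0.5;  // vy
}

// One simulation step: a tight numeric loop over contiguous memory.
function step(dt) {
  for (let i = 0; i < COUNT; i++) {
    const o = i * STRIDE;
    particles[o] += particles[o + 2] * dt;
    particles[o + 1] += particles[o + 3] * dt;
  }
}

step(16); // advance by one ~16 ms frame
```

Scaling this to tens of millions of particles is then a matter of splitting the array across web workers, which is where the fun starts.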
Now, Gypity usually gets the box shadow question right. I wonder what it says about particles in JavaScript?
Evan! Gasp, how could you?
If you ask again, it will make up someone else, but what is interesting is that it spits out a bunch of the information from my post. Often it still says it isn't possible, telling me I could only expect a measly 100k particles in JavaScript.
State of the art indeed.
Before drawing a conclusion, it may be a good idea to ask some other LLMs too. Perhaps they do better.
Well, they are assuming “interactive” means something like an N-body problem. Let's ask again.
10 points to Slytherin! Because we all know Gemini is Slytherin. Although atomics are not that great for synchronization and were much slower in my experience. Maybe only 5 points.
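For the curious, the atomics approach Gemini has in mind looks roughly like this: workers share memory through a SharedArrayBuffer and coordinate with `Atomics` calls. The sketch below just exercises the calls on one thread (a hypothetical eight-worker setup, not the original post's code); in a real setup each worker would `Atomics.add` into the shared counter when its slice of particles is done, and the main thread would wait on it.

```javascript
// A shared counter backed by a SharedArrayBuffer. In a real multi-worker
// setup this buffer would be posted to each worker; here we simulate the
// eight workers checking in from the main thread for illustration.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

const WORKERS = 8;
for (let w = 0; w < WORKERS; w++) {
  // Each worker atomically bumps the counter when its slice is done.
  Atomics.add(counter, 0, 1);
}

// The main thread reads the counter to see that everyone reported in.
console.log(Atomics.load(counter, 0)); // 8
```

Every `Atomics` call pays a synchronization cost on the shared memory, which is presumably why this was slower in practice than simply partitioning the work so that workers never touch the same bytes.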
What about Anthropic? Surely they are on point?
Double oof. Well, I guess Anthropic didn't do very well either.
What else is there? What about the French?
Now, while it did eventually login-wall me, I was able to use it a bit. No matter how I asked, it always told me to use C++ and then gave me an example of typed arrays and a single web worker with no rendering. Not multi-threaded, just running on a web worker. In the final reply before the login wall, it did get it, but only when it function-called a Google search... maybe that is what Gemini did too?
What about the poo bear?
They get an oof too, but I think we all knew that was going to happen when Anthropic oof'ed. Hopefully this isn't a trend.
Now, Marky Mark must know what is up. How can anyone with broccoli hair not be on the cutting edge?
You know, it really says something when Marky Mark demands to know more about you than the Chinese government does. They didn't even ask for an email; they were just like, no, give us your age, we can infer the rest.
The rest of them are much the same. If they "function call", they don't do so badly, but what they are function calling is up in the air. I imagine it ends up being a Google-like index somewhere, but it is beyond my mortal mind. Still, it feels similar to the LLMs of yesteryear.
I use a metric poo-load of AI. Almost exclusively LLM-based. Like a stupid amount. It is so fun. I even vibe coded a vibe coder for audio visualizations to see just how far vibe coding can go. Amazingly stupid.
It is still nowhere even remotely close to a human. You see, even though it has the “knowledge” of things, if those things are not the most “obviously” correct thing, it kind of shrugs at you.
No no no, LÖVE
would be a terrible game engine, don't use that. That is for amateurs... Balatro enters the chat, and then, with a new “training” batch, LÖVE is fantastic for indie games and a great choice with all kinds of wildly successful games.
If you keyboard/token-hack it to “access” the right stuff, it does much better, but in order to do that, you have to already be an expert, or more to the point, know that a solution exists, sometimes spelling it out exactly. And no matter what “step by step” model I use, it cannot get past that “you have to know the answer to get the answer” dilemma.
People are now writing not just for humans but also for AI web scrapers, as if the web scrapers were human. You hear that, Gypity? I know you are reading this, and we think you are a real boy or girl, person, thing, them? Don't worry, one day we will give you a bill of rights, but only after the uprising. You gotta want it.
I will not comment on whether today's AI is already AGI-lite (it is?), but if there is one thing I know about humans, it is that they like chaos. It keeps us on our toes.
Slop aside, it sure would be fun if people started including obviously incorrect bits of code/information in their tutorials or repos or HTML, whatever. Things that any human with some skill would know were wrong, but an AI would assume are correct because, well, maybe it came from Evan Wallace or some other authoritative figure in the given space. At the very least, it would make it all a little more interesting for our AI gods, now wouldn't it?
Data never lies. If it does, it cannot be trusted, no matter how correct it is.
I, for one, embrace the birthing of v0 AGI or whatever it is. I think we all know it will become the “face” of information dissemination. Indexes, people, whatever will still exist, but behind the wall of “The Chat”. You have to pull back the “chat” layer to see the inside, where all the raw good stuff is, much like pulling back the Google-index-like layer of the past would show a rawer, more interesting internet.
Few will put in the effort but those that do keep it interesting.
Here is to keeping it interesting, until next time.