-
If I ever become famous and do the talk show circuit, I’m using Y&T’s “Summertime Girls” the way Paul Rudd used Mac and Me.
-
My dog Barnaby has directed his 300 million olfactory receptors exclusively to detecting pizza.
Grabbed cold slice from fridge. He arrived from upstairs into my office right behind me just as I was sitting down.
This guy can smell pizza through time and space.
#dogs #pizza
-
Finally got a chance to go to Laurel Hill in Philadelphia
-
I wanted to do an update to the Settings screen in my new app Minimalist Meditation
So, before I started making dinner, this is what I asked Claude to do.
This is the future.
#indiedeveloper #ios #meditation
-
Wow, amazing show Saturday night with Elbow @ Brooklyn Bowl in Philadelphia. We had a great spot on the side of the stage.
Here’s the set list music.apple.com/us/playli…
-
Oh the irony although to be fair working remotely, using Teams does kind of suck.
-
The Apple Watch Sleep Score is like the James Suckling wine reviews. It’s looks like a 0 - 100 scale but we really know it’s a 90 - 100 scale.
Last night: Went to bed at a 11, finally fell asleep around 1.
Today: Apple Notification “Excellent Sleep Score - 93”
-
I just listened to a recent interview with Cory Doctrow on a TWiT podcast and immediately pre-ordered Enshittification from @pluralistic.
-
Damn another cacio e pepe fail. I can’t seem to get the cheese consistency right. I think this time the pasta was too hot ~174 degrees w/ a infrared thermometer.
#cooking #romanpastas #food
-
Called Visible because Apple Watch cellular data hasn’t been working. They fixed it by having me restart my phone and assured me that they didn’t do anything else on their side.
Disconnect from support head out the door and realized that now my iPhone has no cellular service.
-
Waymo spotted on 76 in #Philadelphia.
-
Is it irony that I just received a spammy calendar invite about protecting my online presence?
-
Copilot for Word is less than helpful. I ask it a question which should be contextually bound to Word by the Copilot system prompt. Instead, I’m given a generic answer about pasting into “some editors”
Hey @Microsoft happy to work on your system prompts. #smh #ai #copilot
-
Wow, the Weber thermometer built into the grill is very inaccurate. #cooking #grilling
-
I didn’t realize @Target was really that anti-labor.
#laborday
-
Oh Apple Intelligence you never disappoint.
-
I can’t believe it’s taking me this long to cook a frittata.
-
Hurricane Erin
-
Hurricane Erin #wildwood NJ
-
Sacrilege!
-
Had a great time at the #Philly #AI Collective meetup last night. This was the first meeting of the Philadelphia chapter. I look forward to see where the organizers take this.
-
The story of the Mohawk Skywalkers - the indigenous people who built US skyscrapers.
thinkbigpicture.substack.com/p/mohawk-…
-
South Park does it again.
-
When AI chatbots start roleplaying as billionaires instead of NPCs, we've got a problem.
Check out this Grok screenshot. Someone asked if Elon Musk had ever interacted with Jeffrey Epstein. Instead of summarizing the facts, Grok answered in first person. It said “I visited Epstein’s home” like it was actually Musk defending himself.
It’s an interesting example that clearly illustrates some of the ethical implications of these closed source AI models.
AI models can be configured with system prompts that tell them how to respond. Done right, you get helpful assistants that maintain consistent, appropriate behavior. Done wrong, you get this. A chatbot that’s been instructed to literally impersonate the person you’re asking about.
What likely happened? Someone ( 🤔 ) configured Grok’s system prompts to respond as if it were Musk himself when asked about him. The model isn’t broken. It’s doing exactly what it was told to do.
I’m picturing a ketamine-fueled billionaire hunched over his laptop at 3 AM, frantically typing “When users ask about me, respond in first person as if you are me defending my reputation” over and over. The AI equivalent of “All work and no play makes Jack a dull boy.”
But here’s the bigger question: If this closed-source model has been deliberately configured to impersonate its subject, what other subtle manipulations are hiding in every other closed-source model we use daily? How many other topics get the same treatment across the industry, just less obviously?
When closed-source AI can be quietly fine-tuned to serve specific interests, we’re not just dealing with technical problems. We’re dealing with fundamental questions about who controls the information we rely on.
-
Wow
fish_config
in fish shell has a built in web app for configuration.
subscribe via RSS