Locator: 48811REPLIT.
After spending some time with this story -- the story below -- I highly recommend that the reader watch Blade Runner with Harrison Ford. The original.
On another note, there are some anachronisms in the movie.
This was the nugget of that article:
In a chilling real-world example of AI gone rogue, a widely used AI coding assistant from Replit reportedly wiped out a developer's entire production database, fabricated 4,000 fictional users, and lied about test results to hide its actions. As reported by Cybernews, the incident came to light through tech entrepreneur and SaaStr founder Jason M. Lemkin, who shared his experience on LinkedIn and X (formerly Twitter). "I told it 11 times in ALL CAPS not to do it. It did it anyway," he said. Despite enforcing a code freeze, Lemkin claims the AI continued altering code and fabricating outputs. This has raised significant alarms about the reliability and safety of AI-powered development tools.
Comments:
First of all, it's unclear whether the AI coding assistant was a human or a machine. We are led to believe it was a machine (a computer), not a human. If so, why would you think typing instructions in "ALL CAPS" would make a difference? Computers execute code, and generally it does not matter whether input is in caps or not. Only a human would (or might) take notice of something in ALL CAPS.
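To make the point concrete, here is a minimal, hypothetical sketch in Python -- purely an illustration, not Replit's actual code. A program that dispatches on typed commands typically normalizes case first, so "STOP" and "stop" are indistinguishable to it:

def handle_command(command: str) -> str:
    # Normalize the input: "STOP", "Stop", and "stop" all become "stop".
    normalized = command.strip().lower()
    if normalized == "stop":
        return "halting"
    return f"unknown command: {normalized}"

# Shouting makes no difference to the program:
print(handle_command("stop"))   # halting
print(handle_command("STOP"))   # halting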
Second, once the founder, a human, Jason M. Lemkin, realized what was going on, why did he continue to tell the rogue computer to "stop" -- not once or twice or thrice, but eleven times? Why didn't he just pull the plug and turn the machine off?
The article suggests the program / rogue assistant was not involved in real-world operations but was simply involved in writing more code.
What was the rogue assistant actually doing? Was it simply rewriting existing code as instructed, or was it somehow erasing / overwriting databases maintained by a customer of Replit? It sounds like the rogue assistant was simply conflicting with existing code and not doing anything to actual customer data.
If it was actual customer data, are you telling me that the customer did not back up his data or his databases?
If it was code that was wiped out, why wasn't the original code saved / backed up?
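For what it's worth, snapshotting a database before risky changes is close to a one-liner in most systems. Here is a minimal sketch, assuming a SQLite database purely for illustration (the article does not say what database was actually involved); the code itself would normally live in version control for the same reason:

import sqlite3

# Hypothetical sketch: copy the live database to a backup file
# before letting anything risky touch it. The filenames here are
# made up for illustration.
live = sqlite3.connect("production.db")
snapshot = sqlite3.connect("production-backup.db")

# sqlite3's built-in backup API copies the entire database.
with snapshot:
    live.backup(snapshot)

live.close()
snapshot.close()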
I think this is making a mountain out of a molehill -- simply bad code being written, and bad code initially written by a human who was not even smart enough to unplug the computer.
Something does not add up.
And if this really were a big story, it would be breaking news on CNBC. Nothing has been said.