Google asked around 80,000 of its employees to test its still-unreleased Bard AI chatbot before it released it to the public last month, Bloomberg reports.

And the reviews, as it turns out, were absolutely scathing.

The AI was a "pathological liar," one worker concluded, according to screenshots obtained by Bloomberg. In a February note, which was seen by nearly 7,000 workers, another employee called Bard "worse than useless: please do not launch." Another tester called it out for being "cringe-worthy." A different employee was even given potentially life-threatening advice on how to land a plane or go scuba diving.

In short, it was a complete disaster - yet, as we all know, the company decided to launch it anyway, labeling it as an "experiment" and adding prominent disclaimers.

According to Bloomberg, Google employees tasked with figuring out the safety and ethical implications of the company's new products were told to stand aside as AI tools were being developed.

"AI ethics has taken a back seat," former Google manager and president of the Signal Foundation Meredith Whittaker told Bloomberg. "If ethics aren't positioned to take precedence over profit and growth, they will not ultimately work."

Google's decision was likely a desperate move to catch up with the competition, with OpenAI racing ahead with its highly popular ChatGPT, despite the tech being in a seemingly underdeveloped state. The tech giant, however, maintains that AI ethics remain a top priority.