After getting caught using an algorithm to write dozens of articles, the technology publication CNET has apologized (sorta) but wants everybody to know that it definitely has no intention of calling it quits on AI journalism.
Roughly two weeks ago, Futurism reported that CNET had been using an in-house artificial intelligence program to write droves of financial explainers. The articles, some 78 in total, were published over the course of two months under the bylines "CNET Money Staff" or "CNET Money," and weren't directly attributed to a non-human author. Last week, after an online uproar over Futurism's findings, CNET and its parent company, media firm Red Ventures, announced that it would be temporarily pressing "pause" on the AI editorials.
It would appear that this "pause" isn't going to last long, however. On Wednesday, CNET's editor and senior vice president, Connie Guglielmo, published a new statement about the scandal, in which she noted that, eventually, the outlet would continue to use what she called its "AI engine" to write (or help write) more articles. In her own words, Guglielmo said that…

Image: II.studio (Shutterstock)
[Readers should] …expect CNET to keep exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we're known for. The process may not always be easy or pretty, but we're going to continue embracing it – and any new tech that we believe makes life better.
Guglielmo also used Wednesday's piece as an opportunity to address some of the other criticisms aimed at CNET's dystopian algo: namely, that it had frequently created content that was both factually inaccurate and potentially plagiaristic. Under a section titled "AI engines, like humans, make mistakes," Guglielmo copped to the fact that its so-called engine made quite a few mistakes:
After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit… We identified additional stories that required correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague.

The editor also acknowledged that some of the automated articles may not have passed the sniff test when it comes to original content:
In a handful of stories, our plagiarism checker tool either wasn't properly used by the editor or it failed to catch sentences or partial sentences that closely resembled the original language. We're developing additional ways to flag exact or similar matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quotes.
It would be one thing if CNET had very publicly announced that it was engaging in a bold new experiment to automate some of its editorial tasks, thus letting everybody know that it was doing something new and weird. However, CNET did just the opposite of this, quietly rolling out article after article under shadowy bylines and clearly hoping nobody would notice. Guglielmo now admits that "when you read a story on CNET, you should know how it was created," which seems like standard journalism ethics 101.
