Summary: MDN's new "AI Explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical ...
It may do more harm than good: it spits out plausible answers that are either completely or subtly wrong (the latter being worse, obviously), and it's not easy to discern how good an answer actually is.