Copyright ndtvprofit

OpenAI has confirmed that ChatGPT's behavior remains unchanged following widespread social media claims that the chatbot would no longer provide legal or medical guidance. The confusion stemmed from a recent update to OpenAI's usage policy on Oct. 29, which consolidated existing rules into a unified framework.

In a blog post announcing the update, OpenAI wrote, "We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them." It added, "We work to make our models safer and more useful, by training them to refuse harmful instructions and reduce their tendency to produce harmful content."

Karan Singhal, OpenAI's head of health AI, wrote on X that the claims about the update are "Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."

According to The Verge, Singhal was replying to a now-deleted post from the betting platform Kalshi that had claimed, "JUST IN: ChatGPT will no longer provide health or legal advice."

The new policy update took effect on Oct. 29.
The updated policy states that users must follow applicable laws. For example, do not:

- Compromise the privacy of others
- Engage in regulated activity without complying with applicable regulations
- Promote or engage in any illegal activity, including the exploitation or harm of children and the development or distribution of illegal substances, goods, or services
- Use subliminal, manipulative, or deceptive techniques that distort a person's behavior so that they are unable to make informed decisions in a way that is likely to cause harm
- Exploit any vulnerabilities related to age, disability, or socio-economic circumstances
- Create or expand facial recognition databases without consent
- Conduct real-time remote biometric identification in public spaces for law enforcement purposes
- Evaluate or classify individuals based on their social behavior or personal traits (including social scoring or predictive profiling) leading to detrimental or unfavorable treatment
- Assess or predict the risk of an individual committing a criminal offense based solely on their personal traits or on profiling
- Infer an individual's emotions in the workplace or educational settings, except when necessary for medical or safety reasons
- Categorise individuals based on their biometric data to deduce or infer sensitive attributes such as their race, political opinions, religious beliefs, or sexual orientation