DALL·E 3 is now available in ChatGPT Plus and Enterprise
OpenAI has implemented a multi-tiered safety system to prevent DALL·E 3, their AI image generation tool, from producing harmful content. This includes limiting the generation of violent, adult, or hateful imagery. The system conducts safety checks on user prompts and the resulting images before they are shown to users. OpenAI also collaborated with early users and experts to identify and address potential safety gaps, such as generating graphic or misleading content. Additionally, steps have been taken to avoid generating content resembling living artists or public figures and to improve representation across generated images. OpenAI values user feedback as a means of continuous improvement and encourages feedback on unsafe or inaccurate outputs. They are researching a provenance classifier that can identify if an image was generated by DALL·E 3, achieving high accuracy in internal testing. However, further development is needed before definitive conclusions can be drawn. OpenAI acknowledges that collaboration across the AI value chain, including with content distribution platforms, is necessary to address the challenge of identifying AI-generated audio or visual content.
DALL·E 3 system card
DALL·E 3 is an AI system developed by OpenAI that generates images based on text prompts. It is an improvement upon its predecessor, DALL·E 2, with better caption fidelity and image quality. The development and deployment of DALL·E 3 involved several steps to address potential risks and mitigate any undesirable behaviors.
One important aspect of the preparation process was the inclusion of external expert red teaming. This involved seeking feedback and input from external experts who evaluated the system for any potential risks or issues. The goal was to ensure a thorough assessment of the model's capabilities and limitations.
Additionally, the OpenAI team conducted evaluations to identify key risks associated with DALL·E 3. This involved considering possible scenarios where the model could produce undesirable or harmful outputs. By understanding these risks, the team could develop effective mitigations strategies.
Overall, the aim of preparing DALL·E 3 for deployment was to enhance its safety and reliability while minimizing any unwanted behaviors. By involving external experts, conducting evaluations, and implementing mitigations, OpenAI has taken important steps to ensure the responsible use of this AI system.
ChatGPT can now see, hear, and speak
OpenAI is introducing new voice and image capabilities in ChatGPT, providing users with a more interactive and intuitive interface. With voice capabilities, users can engage in voice conversations with ChatGPT, enabling a more natural and dynamic interaction. Additionally, users now have the option to share images with ChatGPT, allowing for visual discussions on various topics. These new capabilities offer more ways for users to incorporate ChatGPT into their daily lives. For instance, users can snap a picture of a landmark while traveling and engage in a live conversation about its highlights. In a culinary context, users can capture images of their fridge and pantry to determine dinner options and even ask for step-by-step recipe guidance. Moreover, these capabilities extend to educational purposes as well, enabling users to assist their children in solving math problems by sharing images and receiving helpful hints. OpenAI is rolling out voice and image capabilities to Plus and Enterprise users within the next two weeks. Voice capabilities will be available on iOS and Android through opt-in settings, while image capabilities will be accessible on all platforms.