Apple’s mixed reality headset will debut at WWDC 2023

Apple will reportedly unveil its long-rumored mixed reality headset at this year’s Worldwide Developers Conference (WWDC), traditionally held in June, according to the latest report from Bloomberg.

<img alt="Apple-headgear" data- data-src="https://kirelos.com/wp-content/uploads/2023/03/echo/Apple-headgear.png" data- decoding="async" height="453" src="data:image/svg xml,” width=”681″>

The unveiling has been delayed multiple times due to technical difficulties. The headset was initially supposed to launch in the spring, but it will now be announced at WWDC before going on sale at the end of 2023.

The headset, rumored to be called the “Reality Pro,” is believed to be a very powerful device that resembles ski goggles and can display 3D content. Reported capabilities include advanced hand tracking, realistic rendering of FaceTime callers, a Digital Crown that lets you exit virtual reality, and more.

According to the latest available information, Apple aims to sell the headset for roughly $3,000, which would make it the company’s most expensive wearable to date.

As we all know, the release dates of such ground-breaking products can change, but this time Apple hopes to make the product available by the end of 2023. If everything goes as planned, the headset will make its official debut at WWDC, and the device may arrive in stores in late 2023.

Apple’s mixed reality headset may not catch on as quickly as other Apple products have at debut. Still, many tech commentators believe it has the potential to grow into a significant revenue generator for Apple in the near future.

Google announces generative AI features for Workspace apps

Microsoft, Alphabet, and other companies are spending billions of dollars on developing and deploying AI technology to gain market share across various industries.

<img alt="Google-BARD" data- data-src="https://kirelos.com/wp-content/uploads/2023/03/echo/Google-BARD.png" data- decoding="async" height="334" src="data:image/svg xml,” width=”800″>

To compete with OpenAI’s ChatGPT (Microsoft has invested billions of dollars in the firm), Google’s parent company, Alphabet, launched Bard, an AI service powered by LaMDA.

Google recently announced several upcoming generative AI features for its Workspace apps, including Google Docs, Gmail, Sheets, and Slides, that use machine learning to generate text and graphics.

The features include:

  • New ways for Google Docs’ AI to brainstorm, summarize, and generate content.
  • The ability for Gmail to draft whole emails from a user’s quick bullet points.
  • The ability for Google Slides to generate AI images, audio, and video to illustrate presentations.

The company demonstrated how cloud customers could use AI to draft emails to coworkers, produce presentations and sales-training documents, and take notes during meetings. Developers can now build applications on Google’s technology, as the company has released some of its underlying AI models.
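For developers, that access could look something like the minimal sketch below, assuming the PaLM API preview and its google.generativeai Python client; the API key, model name, and prompt are placeholders rather than details confirmed in Google’s announcement.

```python
# Minimal sketch of generating text with Google's PaLM API preview.
# Assumes the google.generativeai client library and an API key with
# preview access; the model name and prompt are illustrative.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder credential

response = palm.generate_text(
    model="models/text-bison-001",  # example PaLM text model name
    prompt="Draft a short email inviting the team to a project kickoff.",
    temperature=0.7,
    max_output_tokens=256,
)

print(response.result)  # the generated email text
```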

ChatGPT debuted last year, and Microsoft released its Bing chatbot-enabled search engine in February 2023. Ever since, Google has been determined to overtake its competitors in the new AI race, announcing a series of AI updates to its products.

A set of US-based “trusted testers” will have access to several new AI features the company introduced this month. Google says these and other capabilities will be made available to the public later in the year but did not announce an exact date.

OpenAI announces GPT-4 capable of accepting text or image inputs

OpenAI announced the latest GPT-4 model on its research blog. According to the post, GPT-4 is a large multimodal model that can accept image and text inputs while producing only text output.

<img alt="YouTube video" data-pin-nopin="true" data-src="https://kirelos.com/wp-content/uploads/2023/03/echo/maxresdefault.jpg6411b149eb8d7.jpg" height="720" nopin="nopin" src="data:image/svg xml,” width=”1280″>

After months of whispers and speculation, OpenAI has finally revealed GPT-4, a revolutionary improvement in problem-solving ability.

OpenAI asserts that GPT-4 is “more innovative and collaborative than ever before” and “more accurate in tackling difficult situations.” Although it can understand both text and image input, it produces only text responses.
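As a rough illustration of the “text and image in, text out” design, here is a hypothetical Python sketch. Image input was limited to select partners at launch, so the message shape, model name, and image URL below are assumptions modeled on OpenAI’s chat-style API, not a confirmed public interface.

```python
# Hypothetical sketch: sending text plus an image to GPT-4 and getting
# text back. Image input was not generally available at launch; this
# payload shape is an assumption, not OpenAI's confirmed public API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-4",  # a vision-capable variant would be required
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is unusual about this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
)

print(response["choices"][0]["message"]["content"])  # text-only reply
```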

According to OpenAI, GPT-4 underwent six months of safety training; in internal tests, it was “82% less likely to reply to requests for restricted content” and “40% more likely to offer factual responses than GPT-3.5.”

OpenAI notes that in casual conversation, the difference between GPT-4 and its predecessor GPT-3.5 (the model that powers ChatGPT) can be “subtle.” Sam Altman, CEO of OpenAI, stated that GPT-4 “is still imperfect, still restricted,” adding that it “seems more remarkable on first use than it does after you spend more time with it.”

OpenAI cautions that GPT-4 still shares many of its predecessors’ issues and remains less effective than humans in many real-world circumstances, even as it performs at human level on various professional and academic benchmarks.

The public can access GPT-4 through ChatGPT Plus, OpenAI’s $20-per-month ChatGPT subscription; the model also powers Microsoft’s Bing chatbot. GPT-4 will additionally be available as an API that programmers can build on.
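For the text-only case, calling GPT-4 through the OpenAI Python library looks roughly like the sketch below. It assumes an API key that has been granted GPT-4 access (gated behind a waitlist at launch); the prompt and parameters are illustrative.

```python
# Minimal sketch: text-only GPT-4 call via OpenAI's chat completions
# endpoint, using the openai Python library as of early 2023. Assumes
# an API key with GPT-4 access; prompt and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain GPT-4's multimodal input in one sentence."},
    ],
    max_tokens=100,
)

print(response["choices"][0]["message"]["content"])
```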