The sperm whale ‘phonetic alphabet’ revealed by AI

Researchers studying sperm whale communication say they’ve uncovered sophisticated structures similar to those found in human language.

In the inky depths of the midnight zone, an ocean giant bears the scars of the giant squid she stalks. She searches the darkness, her echolocation pulsing through the water column. Then she buzzes – a burst of rapid clicks – just before she goes in for the kill.

But exactly how sperm whales catch squid, like many other areas of their lives, remains a mystery. “They’re slow swimmers,” says Kirsten Young, a marine scientist at the University of Exeter. Squid, on the other hand, are fast. “How can [sperm whales] catch squid if they can only move at 3 knots [5.5 km/h or 3.5 mph]? Are the squid moving really slowly? Or are the whales stunning them with their vocalisations? What happens down there? Nobody really knows,” she says.

Sperm whales are not easy to study. They spend much of their lives foraging or hunting at depths beyond the reach of sunlight. They are capable of diving over 3km (10,000ft) and can hold their breath for two hours.

“At 1,000m (3,300ft) deep, many of the group will be facing the same way, flanking each other – but across an area of several kilometres,” says Young. “During this time they’re talking, clicking the whole time.” After about an hour, she says, the group rises to the surface in synchrony. “They’ll then have their rest phase. They might be at the surface for 15 to 20 minutes. Then they’ll dive again,” she says.

At the end of a day of foraging, says Young, the sperm whales come together at the surface and rub against each other, chatting while they socialise. “As researchers, we don’t see a lot of their behaviour because they don’t spend that much time at the surface,” she says. “There’s masses we don’t know about them, because we are just seeing a tiny little snapshot of their lives during that 15 minutes at the surface.”

It was around 47 million years ago that land-roaming cetaceans began to gravitate back towards the ocean – that’s 47 million years of evolution in an environment alien to our own. How can we hope to easily understand creatures that have adapted to live and communicate under such different evolutionary pressures from our own?

“It’s easier to translate the parts where our world and their world overlap – like eating, nursing or sleeping,” says David Gruber, lead and founder of the Cetacean Translation Initiative (Ceti) and professor of biology at the City University of New York. “As mammals, we share these basics with others. But I think it’s going to get really interesting when we try to understand the areas of their world where there’s no intersection with our own,” he says.

Read full article

Source: bbc.com, by Katherine Latham and Anna Bressanin, 11.07.2024

AI tool finds cancer signs missed by doctors

An AI tool has proven capable of detecting signs of cancer that were overlooked by human radiologists.

The AI tool, called Mia, was piloted alongside NHS clinicians in the UK and analysed the mammograms of over 10,000 women. 

Most of the participants were cancer-free, but the AI successfully flagged all of those with symptoms of breast cancer—as well as an additional 11 cases that the doctors failed to identify. Of the 10,889 women who participated in the trial, only 81 chose not to have their scans reviewed by the AI system.

The AI tool was trained on a dataset of over 6,000 previous breast cancer cases to learn the subtle patterns and imaging biomarkers associated with malignant tumours. When evaluated on the new cases, it correctly predicted the presence of cancer with 81.6 percent accuracy and correctly ruled it out 72.9 percent of the time.

Breast cancer is the most common cancer in women worldwide, with two million new cases diagnosed annually. While survival rates have improved with earlier detection and better treatments, many patients still experience severe side effects like lymphoedema after surgery and radiotherapy.

Researchers are now developing the AI system further to predict a patient’s risk of such side effects up to three years after treatment. This could allow doctors to personalise care with alternative treatments or additional supportive measures for high-risk patients.

The research team plans to enrol 780 breast cancer patients in a clinical trial called Pre-Act to prospectively validate the AI risk prediction model over a two-year follow-up period. The long-term goal is an AI system that can comprehensively evaluate a patient’s prognosis and treatment needs.

Read full article

Source: artificialintelligence-news.com, by Ryan Daws, 21.03.2024

IKEA’s approach to delivering ethical AI for all

In 2019, IKEA created its first digital ethics policy, positioning responsible AI as a core value rather than an afterthought and outlining its commitment to accountability, transparency, inclusivity, diversity, and sustainability in all AI initiatives. Whilst the company’s large-scale adoption of AI has been extremely well received, IKEA management remains cognisant of the varying and very personal feelings of its employees, since amidst the tremendous opportunity lie understandable concerns about the future.

According to Parag, “IKEA recognises the importance of listening to our 165,000-strong workforce. We are committed to upskilling and reskilling employees – and to helping everyone navigate the changing landscape of AI, while preserving human dignity and agency.”

As the company continues to pivot to its omni-business model, it remains committed to a human-centric approach – building new, equitable, inclusive solutions that uphold the brand values. Some of IKEA’s new initiatives include enhancing customer purchasing experiences, driving increased convenience through innovative supply chain solutions, and leveraging AI to help customers plan their living spaces and create better homes. IKEA is redefining the retail experience and staying true to its ethos of putting what matters most – co-workers, customers, communities, and the planet – at the forefront.

Read full article

Source: virgin.com, by Clare Kelly, 01.03.2024

OpenAI’s GPT Store to launch next week after delays

OpenAI has announced that its GPT Store, a platform where users can sell and share custom AI agents created using OpenAI’s GPT-4 large language model, will finally launch next week.

OpenAI emailed individuals enrolled as GPT Builders, urging them to ensure their GPT creations align with brand guidelines and advising them to make their models public.

The GPT Store was unveiled at OpenAI’s November developers conference, revealing the company’s plan to enable users to build AI agents using the powerful GPT-4 model. This feature is exclusively available to ChatGPT Plus and enterprise subscribers, empowering individuals to craft personalised versions of ChatGPT-style chatbots.

The upcoming store allows users to share and monetise their GPTs. OpenAI envisions compensating GPT creators based on the usage of their AI agents on the platform, although detailed information about the payment structure is yet to be disclosed.

Originally slated for a November launch, the GPT Store faced delays due to the company’s busy month—including the firing and subsequent rehiring of CEO Sam Altman. Initially pushed to December, the launch date experienced further postponements.

Now, with the official announcement of the imminent launch, users eagerly anticipate the opportunity to showcase and profit from their unique GPT creations.

Read full article

Source: artificialintelligence-news.com, by Ryan Daws, 01.05.2024

AI multi-speaker lip-sync has arrived

Rask AI, an AI-powered video and audio localisation tool, has announced the launch of its new Multi-Speaker Lip-Sync feature. With AI-powered lip-sync, the platform’s 750,000 users can translate their content into more than 130 languages and sound as fluent as a native speaker.

For a long time, dubbed content has suffered from a lack of synchronisation between lip movements and voices. Experts believe this is one of the reasons why dubbing is relatively unpopular in English-speaking countries. Synchronised lip movements make localised content more realistic and therefore more appealing to audiences.

A study by Yukari Hirata, a professor known for her work in linguistics, found that watching lip movements (rather than gestures) helps learners perceive difficult phonemic contrasts in a second language. Lip reading is also one of the ways we learn to speak in general.

Today, with Rask’s new feature, it’s possible to take localised content to a new level, making dubbed videos more natural.

The AI automatically restructures the lower face based on references. It takes into account how the speaker looks and what they are saying to make the end result more realistic. 

Read full article

Source: artificialintelligence-news.com (Editor’s note: This article is sponsored by Rask AI), 07.12.2023


ChatGPT and Generative AI Terms You Should Know

Navigate the world of generative AI and ChatGPT using our comprehensive glossary. From ChatGPT Enterprise to one-shot learning, understand key terms shaping the future of artificial intelligence.

In the rapidly evolving world of artificial intelligence, it’s easy to get lost in the jargon. With the rise of ChatGPT and generative AI models, a whole new set of terms has emerged. Whether you’re a business leader, solopreneur, or creative, understanding these terms is crucial. Here’s a glossary to help you navigate the generative AI landscape.

Read full article

Source: aidisruptor, by Alex McFarland, 08.09.2023

LinkedIn goes big on new AI tools for learning, recruitment, marketing and sales, powered by OpenAI

LinkedIn — the Microsoft-owned social platform for those networking for work or recruitment — is now 21 years old, an aeon in the world of technology. To stay current with what the working world is thinking about most these days, and to keep its nearly 1 billion users engaging on its platform, today the company is unveiling a string of new AI features spanning its job hunting, marketing and sales products. They include a big update to its Recruiter talent sourcing platform, with AI assistance built into it throughout; an AI-powered LinkedIn Learning coach; and a new AI-powered tool for marketing campaigns.

The social platform — which pulled in $15 billion in revenues last year, a spokesperson tells me — has been slowly putting in a number of AI-based features across its product portfolio. Among them, back in March it debuted AI-powered writing suggestions for those penning messages to other users on the platform. And recruiters have also been seeing a series of tests around AI-created job descriptions and other features this year. This latest raft of announcements is building on that.

For some context, LinkedIn is not entirely new to the AI rodeo. It has, in fact, been a heavy user of artificial intelligence over the years. But until recently most of that has been out of sight. Ever been surprised (or unnerved) at how the platform suggests connections to you that are strangely right up your street? That’s AI. All those insights that LinkedIn produces about what its user base is doing and how it’s evolving? That’s AI, too.

Read full article

Source: techcrunch.com, by Ingrid Lunden, October 3, 2023

Vinod Narayanan, Country President, AstraZeneca Malaysia, and Dr Puteri Norliza binti Megat Ramli, Deputy Director, Institut Kanser Negara, at the collaboration announcement Photo courtesy of AstraZeneca Malaysia

AstraZeneca expands AI lung screening to public hospitals in Malaysia

It has partnered with Institut Kanser Negara to promote the adoption of AI-based lung screening in public health.

AstraZeneca has tied up with Institut Kanser Negara, a centre of excellence for cancer care in Malaysia, to incorporate AI medical imaging into early lung cancer screening at government clinics and hospitals. 

WHY IT MATTERS

Lung cancer is one of the most common cancers in Malaysia, claiming about 19 deaths per 100,000 people. It is said that 8 out of 10 cases are diagnosed at Stage 4, “making early screening crucial to expedite diagnosis and treatment for patients,” according to AstraZeneca Malaysia.

Introducing AI can scale early cancer screening at the population level. It is expected not only to improve people’s chances of survival but also to help reduce the financial burden of cancer on the country’s healthcare system. 

THE LARGER CONTEXT

AstraZeneca Malaysia and IKN are collaborating under Projek Saringan Awal Paru-Paru (SAPU), which seeks to promote the adoption of AI imaging in government healthcare facilities. 

The initiative represents the third phase of AstraZeneca’s flagship early lung cancer screening programme, the Lung Ambition Alliance (LAA). Launched in 2021, it first equipped private primary care clinics under the Qualitas Group with Qure.ai’s imaging software to conduct lung cancer screening. The following year, the second phase of the programme involved three private tertiary hospitals where patients were referred for further diagnosis using low-dose CT scans.

Over the past three years, the LAA has screened nearly 19,000 patients and referred over 400 high-risk patients to hospitals for further investigation. 

This year, Projek SAPU will provide AI technology as part of a pilot study at selected government hospitals and clinics. Sandbox sites have also been designated to collate data on the results of the project, which will be used to gain an understanding of Malaysia’s local landscape and disease demographics. 

The project goes beyond lung cancer screening: it also aims to screen for other lung-related diseases, such as COVID-19, tuberculosis, and lung fibrosis.

Read full article

Source: healthcareitnews.com, Adam Ang, 26.09.2023

ChatGPT update will give it a voice and allow users to interact using images

The move will bring the artificial intelligence chatbot closer to popular voice assistants such as Apple’s Siri and Amazon’s Alexa

OpenAI’s ChatGPT is getting a major update that will enable the viral chatbot to have voice conversations with users and interact using images, moving it closer to popular artificial intelligence (AI) assistants like Apple’s Siri.

The voice feature “opens doors to many creative and accessibility-focused applications”, OpenAI said in a blog post on Monday.

Similar AI services like Siri, Google voice assistant and Amazon’s Alexa are integrated with the devices they run on and are often used to set alarms and reminders, and deliver information off the internet.

Since its debut last year, ChatGPT has been adopted by companies for a wide range of tasks from summarizing documents to writing computer code, setting off a race among big tech companies to launch their own offerings based on generative AI. Google has imminent plans to launch its answer to ChatGPT, called Gemini, which is reportedly already being tested by a small group of companies. Amazon, for its part, announced on Monday it would be investing up to $4bn in the AI startup Anthropic to provide support and boost the e-commerce company’s generative AI efforts.

ChatGPT’s new voice feature can also narrate bedtime stories, settle debates at the dinner table and speak out loud text input from users.

The technology behind it is being used by Spotify for the platform’s podcasters to translate their content into different languages, OpenAI said.

With image support, users can take pictures of things around them and ask the chatbot to “troubleshoot why your grill won’t start, explore the contents of your fridge to plan a meal, or analyze a complex graph for work-related data”.

Alphabet’s Google Lens is currently a popular choice for gaining information about images.


Read full article

Source: theguardian.com, Reuters 25.09.2023

Microsoft unveils AI-powered Copilot for Windows 11

Microsoft has revealed a new set of artificial intelligence-powered solutions across its products, kicking off with Windows 11 on Sept. 26.

Microsoft has taken another step toward integrating artificial intelligence (AI) technology into its products. On Sept. 21, the company announced Microsoft Copilot, which merges interfaces on Windows with language models. 

According to Microsoft’s announcement, the solution will work as an app or reveal itself to users by right-clicking. It will be available as enhancements on popular apps like Paint, Photos and Clipchamp. Across other products, search engine Bing will be supported by OpenAI’s new DALL-E 3 model, while Microsoft 365 Copilot will integrate a chat assistant for enterprise solutions.

“We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology,” Microsoft stated in the announcement. An early version of Copilot will be available as a free Windows 11 update starting Sept. 26 and across Bing, Edge and Microsoft 365 later this year, said the company.

One of the tech giant’s bets is its Microsoft 365 Copilot, designed to assist users and enterprises with repetitive tasks, such as writing documents, summarizing and presentations. The solution works through Microsoft’s traditional applications — such as Word, Excel and PowerPoint — and costs $30 a month per user, on top of the subscription fee for accessing Microsoft 365 apps.

Read full article

Source: cointelegraph.com, by Ana Paula Pereira, 22.09.2023