New York, Nov 29 (IANS) People with visual impairments often have difficulty understanding memes, but researchers have now developed a method to automatically identify memes and apply pre-written templates that add descriptive alt text, making them intelligible via existing assistive technologies.
The study was presented at the ACCESS conference in Pittsburgh, US.
Visually impaired people use social media like everyone else, often with the help of screen reader software. But that technology falls short when it encounters memes, which don’t include alternate text, or alt text, to describe what’s depicted in the image.
Memes are images that are copied and then overlaid with slight variations of text. They are often humorous and convey a shared experience, but “if you’re blind, you miss that part of the conversation,” said study researcher Cole Gleason from Carnegie Mellon University in the US.
Memes largely live within social media platforms that have barriers to adding alt text.
Twitter, for example, allows people to add alt text to their images, but that feature isn’t always easy to find. Of 9 million tweets the researchers examined, one million included images and, of those, just 0.1 per cent included alt text.
Researchers said that basic computer vision techniques make it possible to describe the images underlying each meme, whether it be a celebrity, a crying baby, a cartoon character or a scene such as a bus upended in a sinkhole.
Optical character recognition techniques are used to decipher the overlaid text, which can change with each iteration of the meme.
For each meme type, it’s only necessary to make one template describing the image, and the overlaid text can be added for each iteration of that meme.
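The pipeline described above can be sketched in a few lines: fingerprint the meme's base image to recognise which known template it uses, then slot the overlaid text into that template's pre-written description. This is a minimal illustrative sketch, not the researchers' actual system — the names (`dhash`, `register_template`, `describe_meme`) are hypothetical, the "image" is a toy grayscale grid, and in a real system the overlaid text would come from an OCR engine such as Tesseract.

```python
def dhash(pixels):
    """Difference hash: one bit per horizontally adjacent pixel pair.

    Robust to the text overlay changing, since the underlying image
    dominates the fingerprint in a real, full-size version of this.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append("1" if left > right else "0")
    return "".join(bits)

# One pre-written alt-text description per known meme template,
# keyed by the base image's hash.
TEMPLATES = {}

def register_template(pixels, description):
    TEMPLATES[dhash(pixels)] = description

def describe_meme(pixels, overlaid_text):
    """Return alt text for a meme; fall back to a generic description."""
    base = TEMPLATES.get(dhash(pixels))
    if base is None:
        return f'Image with overlaid text: "{overlaid_text}"'
    return f'{base} Overlaid text reads: "{overlaid_text}"'

# Usage: register one template once, then every iteration of that meme
# gets alt text automatically (toy 2x3 grid stands in for a real image).
grumpy = [[9, 5, 7], [2, 8, 1]]
register_template(grumpy, "Meme: a frowning cat looks at the camera.")
print(describe_meme(grumpy, "I had fun once. It was awful."))
```

The design mirrors the article's point: the per-template description is written once by a human, while OCR supplies the part that changes with each iteration.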
But writing out what the meme is intended to convey proved difficult.
“It depended on the meme if the humor translated. Some of the visuals are more nuanced, and sometimes it’s explicit and you can just describe it,” Gleason said.
The team also created a platform to translate memes into sound rather than text. Users search through a sound library and drag and drop elements into a template.
This system was made to translate existing memes and convey the sentiment through music and sound effects.
“One of the reasons we tried the audio memes was because we thought alt text would kill the joke, but people still preferred the text because they’re so used to it,” Gleason said.
The researchers are currently working on related projects, including a browser extension for Twitter that attempts to add alt text for every image and could include a meme system.