Upholding the Integrity of Facts in a Post-Truth Society

Blog post by Annica Ryng, a member of the Parsd team, also published on annicaryng.com.


Since ChatGPT, DALL-E, Midjourney and other AI tools entered our homes, we have entertained ourselves by polishing our writing, generating new ideas and creating images. AI has left us both fascinated and frightened, captivated and overwhelmed. Some say that AI poses a threat to humanity. Others claim the threat is imminent: AI will extinguish humanity. I sat down with my friend Alexandra, who shares my passion for technology for good, to discuss AI. In 2021, Alexandra founded Parsd, an AI tech start-up that I had the opportunity to be a part of, from the conceptual idea to the initial product development. One thing she said stayed with me:

What is going to happen is that people who are not capable of using AI will be replaced by those who are. There will be a new digital divide. Some people will be able to use AI efficiently and responsibly and others will either feel it is like magic or mistrust it all.

Alexandra Kafka Larsson

The starting point of Parsd was our belief in democratic societies where citizens can access trusted information to make informed decisions, both about matters of governance and about what to spend their money on. To maintain the integrity of facts in a post-truth society, we need analytical tools that help deliver insights based on a common set of facts and references for what is or is not happening. Yet facts alone are not enough. Insights based on facts must be told as a visual story, emotive and memorable, to matter to others. We wanted to optimise the workflow for researchers, journalists, students and others, giving them more time to think, analyse and present. Together with Blockzero, a digital design and communication studio, we built the visual identity and the first content. Within three years, Parsd won a place in a prestigious one-year incubator programme run by Sting, a leading community of entrepreneurs, and was invited to join WASP, the Wallenberg AI, Autonomous Systems and Software Program. The Parsd web app is currently in a beta-testing phase.

I asked Alexandra four questions about Parsd and her views on AI, today and in the future.

1. What problems does Parsd try to solve?

We are fighting an uphill battle against a high volume of data, stored in different places, which is firstly too large to process humanly, secondly too large to process with high quality, and thirdly too hard to process under tight deadlines. Although research isn’t always fact-checked, we believe it must hold up to scrutiny. Parsd tries to solve this challenge by developing a digital research hub where you can bring all your data, both data you own and data accessed behind paywalls, into one place. Parsd is AI-powered to help you consolidate, organise and process unstructured data from documents, videos, sound recordings and images. The Parsd AI chat enables you to slice and dice through all your data while keeping track of your sources, which can be labelled as trusted or not. It is a digital tool to manage all kinds of sources, rather than an app only for trustworthy sources.
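
To make the source-tracking idea concrete, here is a minimal sketch of what a trust-labelled source record could look like. The `Source` and `Trust` names and fields are hypothetical illustrations of the general pattern, not Parsd’s actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class Trust(Enum):
    """Hypothetical trust label a researcher attaches to a source."""
    TRUSTED = "trusted"
    UNVERIFIED = "unverified"
    UNTRUSTED = "untrusted"


@dataclass
class Source:
    """One item in the research hub: a document, video, recording or image."""
    title: str
    location: str                    # file path or URL, possibly behind a paywall
    media_type: str                  # "document", "video", "audio" or "image"
    trust: Trust = Trust.UNVERIFIED  # every source starts out unverified
    notes: list = field(default_factory=list)


def trusted_only(sources):
    """Restrict an analysis to sources the researcher has marked as trusted."""
    return [s for s in sources if s.trust is Trust.TRUSTED]


hub = [
    Source("Committee report", "reports/2023-04.pdf", "document", Trust.TRUSTED),
    Source("Forwarded clip", "clips/unknown.mp4", "video"),  # stays UNVERIFIED
]
print([s.title for s in trusted_only(hub)])  # -> ['Committee report']
```

Keeping the label on the source itself means any later answer or insight can be filtered or flagged by the trustworthiness of the material it drew on.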

2. How does Parsd use AI today and what is the potential for the future?

Today, Parsd uses several AI foundation models as well as “traditional” AI models, a few years old, that handle language understanding, entity extraction, semantic analysis and so on, i.e. tools to parse text. The chat can be used to ask questions about your text. It can also be used to suggest questions about your text and propose hypotheses, which may in fact help you overcome your own biases. If you use AI to ask questions only about your own research material, you do not risk the kind of hallucination that ChatGPT, for example, may produce. Last year, the big thing was Large Language Models and the chat interface. We started with text-based LLMs; now we have Multi-Modal Models that combine text, motion, video, sound, music, 3D and so on. The models can now translate between the different media, so the question is how we can combine them in ways we thought were impossible before. The next thing will be AI agents that remove steps from the workflow, and execute and scale work much better.
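
As an aside, a common way to implement “questions grounded only in your own material” is retrieval plus a constrained prompt. The sketch below uses scikit-learn’s TF-IDF purely to illustrate that general pattern; it is my own assumption, not a description of Parsd’s pipeline, and the final model call is left as a placeholder.

```python
# A minimal sketch of grounding answers in your own documents: rank your
# material against the question, then constrain the model to the best match.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Interview transcript: the mayor confirmed the budget vote on 12 May.",
    "Field notes: residents reported flooding near the river in April.",
    "Press release: the council announced a new housing programme.",
]

question = "When is the budget vote?"

# Rank the researcher's own documents by similarity to the question.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
best = scores.argmax()

# Build a prompt that restricts the model to the retrieved source, so every
# answer can be traced back to material the researcher already holds.
prompt = (
    "Answer using ONLY the source below; say 'not in my sources' otherwise.\n"
    f"Source: {documents[best]}\n"
    f"Question: {question}"
)
print(prompt)  # hand this to whichever language model you use
```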

3. What are the challenges for an AI tech startup today?

The investment climate is harder now than it was a few years ago. In the past, a startup could get funding on an idea; nowadays, you are required to prove your business model and its metrics. So you must develop your product further, which also exposes it more to regulation. Competing with the big tech companies is, of course, also challenging. It is still possible to develop your own AI, but it requires talent that is both really hard to find and expensive. Hardware is also both difficult to access and expensive.

4. What are you most excited about and what do you fear the most about AI?

We have the potential to do good with the data we gather, and to do things we thought were science fiction a few years ago, solving hard and complex problems much faster. What we need to keep a watchful eye on is all the data that is generated by AI. The risk is that we create a feedback loop where we are training on data that was itself generated by AI. We could end up in a situation where it is hard to know what to trust, and in the worst-case scenario, we no longer trust anything or anyone. That is scary.

From a wider perspective, the EU is trying its utmost to act fast and not miss the train in mitigating the risks of AI. The new EU AI Act aims to manage high-risk technology, with transparency requirements for foundation models. The EU AI Office’s objective is to support the implementation of the Act with the help of a code of conduct, guidelines and advisors, while an AI Pact is meant to encourage companies to comply ahead of the transposition deadline. Civil society called for stronger protection of rights holders. Companies wanted fewer constraints; even President Macron sided with the French AI company Mistral to give it a tailwind in the race for tech innovation (Politico). Ultimately, the EU AI Act is a first of its kind, agreed on by the 27 EU countries. In the best-case scenario, there might be a ‘Brussels effect’ that encourages other countries to follow the lead. In the worst case, the technology will outpace the regulation, writes Euractiv. Nevertheless, the EU alone cannot govern AI.

Globally, leaders of Western democracies and industrialised economies are under pressure to agree on a common rulebook for technology that serves humanity; if they do not, China may step in and the development will take a very different turn, Politico reports. The UN General Assembly also urged states to be vigilant and to develop ways to ensure that AI is used safely and securely, and in respect of international human rights law.

It is exciting that AI is developing fast: you can access new technologies and possibilities that weren’t there just a few weeks ago. On the other hand, a lot can change rapidly, and you can end up focusing on the wrong thing.

Alexandra Kafka Larsson

I agree with Alexandra. Our shared challenge is to stay updated and relevant, and to make sure we focus on the right things. We need innovative and creative solutions to address not only the risks of AI, but also its ethical and societal impact, such as bias, fairness, accountability and transparency. Jobs must be reinvented, the digital divide narrowed, and the principles of authorship and remuneration for human creation rethought. While AI is superior at reproduction, humanity’s superpower is originality. After all, Einstein said it about 100 years ago: “we cannot solve our problems with the same thinking we used when we created them”.

