Founded by Layla Li and Sonali Sanghrajka, KOSA AI is headquartered in Nairobi. The company helps enterprises detect, audit, and correct bias in their AI systems, implements measures to mitigate that bias, and can monitor AI models after deployment to catch bias before it causes harm.
KOSA AI was started in Nairobi, Kenya. The two founders met at Antler Nairobi Cohort 3, and a shared passion for driving positive impact through technology brought them together.
They are currently based in Amsterdam, the Netherlands, and Nairobi, Kenya, and believe that building a distributed, diverse workforce will help them achieve their mission of enabling more inclusive technologies.
With companies across the world increasingly adopting AI solutions, one obstacle to enterprise AI adoption is limited in-house expertise for building AI models and managing data complexity.
One consequence is AI bias, which affects large and small enterprises alike.
AI bias is the manifestation of systemic prejudice and unfairness in the results of artificial intelligence algorithms.
This can arise from flawed training data or unrepresentative real-world data used to develop the AI, or from the conscious or unconscious bias of the developers creating it.
KOSA AI addresses this challenge by helping enterprises detect, audit, and explain bias in their AI models, and then implement corrective steps to mitigate it.
Additionally, the startup can support an enterprise in monitoring its AI models post-deployment for any drift toward bias.
How it Works
Algorithm Discrimination
Algorithmic discrimination occurs when AI systems make inaccurate predictions because of faulty or poor-quality data. These predictions reflect societal prejudices, which the AI system reproduces as bias and discrimination against certain groups of people.
For example, in 2015 Amazon found that its experimental hiring algorithm was biased against women: the model had been trained on a dataset dominated by men's résumés.
AI Bias
AI bias stems from incomplete training data and from prejudiced assumptions made during algorithm development.
The data used to train these algorithms is often drawn from historical datasets that reflect biases accumulated over time.
For example, UnitedHealth's Optum AI system showed drastic racial bias, effectively denying extra care to 46% of qualifying Black patients. The model assumed that patients who incur the highest costs need crucial care most.
However, Black patients spend less on medical care per year than White patients at the same level of need, which led the AI to make biased decisions.
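A bias audit of the kind described above typically starts with a fairness metric. The sketch below is a hypothetical illustration in Python, not KOSA's actual implementation: it computes the demographic parity difference, i.e. the gap in positive-decision rates between groups, where 0.0 indicates parity. The function names and toy data are invented for this example.

```python
# Illustrative bias-audit metric: demographic parity difference.
# Group labels and model decisions below are toy data, not real records.
from collections import defaultdict

def selection_rates(groups, decisions):
    """Fraction of positive decisions (1s) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(groups, decisions):
    """Largest gap in selection rate between any two groups (0.0 = parity)."""
    rates = selection_rates(groups, decisions)
    return max(rates.values()) - min(rates.values())

# Toy hiring example: 1 = shortlisted, 0 = rejected.
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

print(demographic_parity_difference(groups, decisions))  # 0.5
```

Here group A is shortlisted 75% of the time and group B only 25%, so the metric flags a 0.5 gap; an unbiased model would score near zero.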
Mitigation
KOSA builds software that embodies "ethics by design": ethical values, principles, requirements, and procedures are systematically built into AI design, development, deployment, and the entire AI pipeline. Its tools serve every related stakeholder, technical and non-technical alike, to minimize risk and maximize the utility of AI.
Explainable AI
Explainable AI is used to better understand why an automated system behaves the way it does, but on its own it does not prevent ethical issues from arising.
KOSA therefore pairs explainability with its "ethics by design" approach, which oversees an enterprise's entire AI governance program.
Use Cases
KOSA AI is industry-agnostic, though healthcare is its current focus, and the platform works with tabular, image, and text data.
Some use cases include improving diagnostic accuracy by mitigating bias in skin-cancer triage models and increasing talent diversity by correcting gendered wording in job descriptions.
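As a rough illustration of the second use case, the sketch below flags gendered wording in a job description. The word lists and function name are invented for this example; a production tool would rely on a much larger, research-based lexicon and would also suggest neutral replacements.

```python
# Illustrative check for gendered wording in a job ad.
# The tiny word lists below are examples only, not a complete lexicon.
import re

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "fearless"}
FEMININE_CODED  = {"nurturing", "supportive", "collaborative", "empathetic"}

def flag_gendered_terms(text):
    """Return the masculine- and feminine-coded words found in the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We want a competitive rockstar who is also collaborative."
print(flag_gendered_terms(ad))
# {'masculine': ['competitive', 'rockstar'], 'feminine': ['collaborative']}
```

Flagged terms could then be rewritten in neutral language before the ad is published, which is the kind of correction the use case above describes.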
Caring
Their Automated Responsible AI System future-proofs AI systems against compliance risk and algorithmic discrimination, so that companies can gain an edge over competitors, build products for the future, and increase revenue.
Risky AI use cases can draw fines of up to 4% of global annual turnover or €20M.
Moreover, companies have lost a third of the revenue from affected customers in the year following a data-misuse incident.
Founders
Layla Li
Layla Li is the Co-founder and CEO of KOSA AI.
She has worked across four continents, with over seven years spent building tech solutions at Philips, Tesla, and other companies.
Outside work, she enjoys learning about new cultures and languages over a drink.
Sonali Sanghrajka
Sonali Sanghrajka is the Co-founder of KOSA AI.
She has over ten years of experience serving the healthcare sector, creating brand strategy for the EMEA region, and commercializing products worth $500M in both private and public settings.
Investors & Funding Rounds
EchoVC Partners, APX, The Continent Venture Partners, and Angel investors
KOSA AI, an automated responsible AI systems platform, has raised an undisclosed pre-seed round to scale its platform.
The round was led by EchoVC Partners and also includes APX, Dale Matthias, Fineday Ventures, The Continent Venture Partners, and Arch Capital.
The company will use the funds to develop its responsible AI platform, helping companies across sectors such as healthcare, hiring and HR, and credit and insurance deploy more inclusive and sophisticated technology.
Main Competitors
Vette: an instant human-engagement platform that lets companies spend less time leaving voicemails and more time onboarding new hires.
Locum's Nest: an award-winning software platform that connects healthcare professionals to vacant work through innovative technology.