Responsible AI refers to the design, development and application of AI systems in an ethical, fair and safe manner. Developers and users must ensure that AI systems align with human values, respect fundamental rights and promote societal well-being. The EU AI Act embeds responsible AI principles, such as privacy, non-discrimination and transparency, in legislation. To implement AI responsibly, employees who work with AI must also have adequate knowledge, known as “AI literacy”: they need to be aware of both the opportunities and the risks of AI.
Responsible AI is therefore a broad concept, and we deliberately interpret “responsible” broadly: AI with an eye for people and society. What does this mean in practice? We develop useful AI with a focus on privacy, data security, sustainability, ethics and explainability. Together with you, we determine what the AI should do to support you as well as possible, guided by the principle that the AI must be useful and controllable. We also enthusiastically involve users in workshops and presentations to increase AI literacy. Our commitment to responsibility is reflected in our working methods as well: we communicate clearly, honestly and transparently.