Lack of Talent and Expertise: Implementing AI requires skilled professionals—from data scientists to machine learning engineers. Such specialists are scarce in Moldova, and competition for them is high. According to surveys, over 56% of companies in Central and Eastern Europe report difficulties hiring qualified AI talent
(AI and the Future of Work: What Moldovan Companies Need to Know). Moldovan universities are only beginning to offer specialized programs; there is a clear need for courses in data analysis, neural-network programming, and AI ethics. Neighboring Romania has already launched a master's program in artificial intelligence at the Polytechnic University of Bucharest to address its talent gap; Moldova would do well to follow suit.
In addition, it is important to upskill the current workforce: managers need training in data literacy, and analysts should learn modern AI tools. Businesses can invest in training and online courses for their employees. Without solving the talent issue, AI adoption risks stalling, since even off-the-shelf tools require knowledgeable users.
Limited Access to Technology and Data: Most cutting-edge AI developments are created abroad and require substantial computing power. Training complex models demands powerful servers (GPU/TPU clusters) and large volumes of data. Many Moldovan companies lack both the infrastructure and sufficient local data.
Cloud technologies partly solve this problem—resources can be rented from major providers (AWS, Azure, Google Cloud), and AI services can be accessed via subscription. However, challenges remain: high-speed internet access, the cost of such services, and data security concerns.
Small businesses may benefit from ready-made cloud solutions (like ChatGPT or AI features in office apps), as they are relatively affordable. But larger companies wanting to develop their own models will need to invest either in local infrastructure or expensive cloud capacities.
Another important factor is data. For AI to work well, it needs to be trained on high-quality, diverse datasets. Moldova's markets are relatively small, so local businesses may benefit from data sharing (e.g., banks could create anonymized joint databases for training anti-fraud systems) or from using public global datasets. Either route brings data privacy and regulatory compliance into focus (see the next point).
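To make the data-sharing idea concrete, the anonymization step could start with pseudonymization: each participating institution replaces raw customer identifiers with keyed hashes before records are pooled. The sketch below is a minimal Python illustration with hypothetical field names and key handling; real anonymization for regulatory purposes requires more than hashing (other identifying fields must also be removed or generalized, and keys managed securely).

```python
import hmac
import hashlib

def pseudonymize(customer_id: str, secret_key: bytes) -> str:
    """Return a stable, non-reversible token for a customer ID.

    The same ID always maps to the same token (so fraud patterns across
    records are preserved), but the token cannot be reversed without the key.
    """
    return hmac.new(secret_key, customer_id.encode(), hashlib.sha256).hexdigest()

# Illustrative only: in practice the key would live with a trusted
# coordinator or a key-management service, never in source code.
key = b"coordinator-held-secret"

record = {"customer_id": "MD-000123", "amount": 240.0, "flagged": False}
shared_record = {**record, "customer_id": pseudonymize(record["customer_id"], key)}
```

The design choice here is a *keyed* hash (HMAC) rather than a plain hash: without the key, an outsider cannot rebuild the mapping by hashing guessed IDs, which matters when the ID space (account numbers) is small and enumerable.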
Regulation and Ethics: New technologies bring new risks, and AI is no exception. Top concerns include client data privacy, protection from leaks, and transparency of AI decisions (understanding how and why a decision was made).
The European Union has already passed the AI Act, a comprehensive set of rules that sets standards for algorithm use in areas ranging from autonomous vehicles to recruitment (EU AI Act – EUR-Lex). One of its goals is to prevent abuse and discrimination through AI. The new regulations require companies to assess the risks associated with their AI systems, document data sources and algorithms, and, for high-risk use cases, obtain special authorizations.
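To make the documentation duty concrete, a company might keep an internal record per AI system covering purpose, risk tier, data sources, and oversight. The sketch below is a hypothetical Python example; the field names are illustrative and not the Act's legal schema.

```python
# Hypothetical minimal record of the documentation a higher-risk AI system
# might need: intended purpose, risk tier, data provenance, human oversight.
system_record = {
    "system_name": "credit-scoring-assistant",
    "intended_purpose": "support loan officers; final decision stays human",
    "risk_level": "high",  # per the Act's risk-tier approach (assumed label)
    "data_sources": [
        {"name": "internal loan history 2018-2024", "personal_data": True},
        {"name": "public macroeconomic indicators", "personal_data": False},
    ],
    "human_oversight": True,
    "last_risk_assessment": "2025-01-15",
}

def validate_record(rec: dict) -> list:
    """Return the documentation fields that are missing or empty."""
    required = ["intended_purpose", "risk_level", "data_sources", "human_oversight"]
    return [field for field in required if not rec.get(field)]
```

Even this toy check captures the practical point: documentation gaps become visible before deployment rather than during an audit.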
Moldova, in its pursuit of EU alignment, will likely adopt similar regulations in time, so Moldovan businesses planning to use AI actively should start accounting for regulatory requirements now. It is already crucial to align systems with existing personal data protection laws. In fact, 27% of AI-active firms in Central and Eastern Europe report difficulties due to regulations, particularly around privacy and fears of job displacement (AI and the Future of Work: What Moldovan Companies Need to Know). Beyond legal concerns, there are ethical ones: society expects new technologies to be used responsibly. Businesses should preemptively define their AI usage policies, for example by specifying situations where a human must always make the final decision (e.g., safety, health, major financial transactions).
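Such a policy can even be encoded directly as a routing rule in software. The sketch below is a hypothetical Python example, with assumed domain names and thresholds; it shows how "a human always decides in high-stakes cases" becomes an explicit, auditable check rather than an informal guideline.

```python
# Illustrative human-in-the-loop gate: the model may decide alone only in
# low-stakes cases; safety, health, and large transactions always escalate.
HIGH_STAKES_DOMAINS = {"safety", "health"}  # assumed policy categories
AMOUNT_THRESHOLD = 10_000  # hypothetical cutoff for "major" transactions

def route_decision(domain: str, amount: float, model_confidence: float) -> str:
    """Return 'auto' if the AI may decide alone, else 'human_review'."""
    if domain in HIGH_STAKES_DOMAINS or amount > AMOUNT_THRESHOLD:
        return "human_review"
    if model_confidence < 0.9:  # low model confidence also escalates
        return "human_review"
    return "auto"
```

The escalation conditions are deliberately simple and conservative: a reviewer can read the whole policy in a few lines, which is itself a transparency benefit.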
Transparency and accountability in AI deployment will help earn trust from both customers and partners. In short, working with regulators should be seen not as a burden but as a smart business move: by following the rules, companies protect themselves from risks and build a trustworthy reputation in the emerging AI economy.