We are innovation pioneers. We invest in research to provide our business clients and research collaborators with the most advanced ML and AI solutions.
Our Role: LIBRA performs Machine Learning-based measurement and verification (M&V) activities to model buildings' energy footprints. The aim is to predict their future energy consumption and offer an independent assessment of the effectiveness of the interventions that PRELUDE partners will apply in the project (the core idea is sketched below).
Our approach will be evaluated in 8 building sites of varying sizes and functions in five European countries: Switzerland, Italy, Poland, Denmark, and Greece.
Duration: Dec 2020 – May 2024
Call Identifier: LC-EEB-07-2020
This project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under GA N° 958345.
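The core of ML-based M&V can be sketched in a few lines: train a baseline model on pre-intervention data, predict the counterfactual consumption after the intervention, and report the gap as estimated savings. Below is a minimal illustration with scikit-learn; the column names (timestamp, outdoor_temp, kwh) and the model choice are assumptions made for the sketch, not the actual PRELUDE pipeline.

```python
# Minimal M&V sketch: counterfactual baseline for energy savings.
# Assumes hourly weather features and metered kWh; the column names
# are hypothetical, not the PRELUDE schema.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def estimated_savings(df: pd.DataFrame, intervention_date: str) -> float:
    features = ["outdoor_temp", "hour", "weekday"]
    pre = df[df["timestamp"] < intervention_date]    # training period
    post = df[df["timestamp"] >= intervention_date]  # reporting period

    model = GradientBoostingRegressor().fit(pre[features], pre["kwh"])
    counterfactual = model.predict(post[features])   # consumption had nothing changed
    return float(counterfactual.sum() - post["kwh"].sum())  # positive => savings
```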
Our Role: LIBRA develops METAMINE, the project's Immersive Business Intelligence system, which combines immersive dashboards, virtual control rooms and a real-time alerting system. LIBRA also leads the Machine Learning Operations (MLOps) for the MASTERMINE ML/AI systems and is responsible, among other things, for the design and development of an AI-assisted data mapping framework (a toy version is sketched after this project summary).
Our approach will be demonstrated in five EU demo cases in Spain, Greece, Finland and Poland, and in one replication demo in South Africa.
Duration: Dec 2022 – Nov 2026
Call: HORIZON-CL4-2022-RESILIENCE-01
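To give a feel for what AI-assisted data mapping involves, the sketch below suggests a target field for each incoming column by fuzzy name similarity. The real MASTERMINE framework is far more sophisticated; the schema and field names here are hypothetical.

```python
# Toy data-mapping sketch: suggest a target field for each source
# column by fuzzy name similarity. All field names are hypothetical.
from difflib import get_close_matches

TARGET_SCHEMA = ["site_id", "timestamp", "ore_grade", "energy_kwh"]

def suggest_mapping(source_columns: list[str]) -> dict[str, str | None]:
    mapping = {}
    for col in source_columns:
        hits = get_close_matches(col.lower(), TARGET_SCHEMA, n=1, cutoff=0.6)
        mapping[col] = hits[0] if hits else None  # None => route to human review
    return mapping

print(suggest_mapping(["SiteID", "time_stamp", "grade_ore"]))
```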
Our Role: LIBRA develops the project's AI-enabled business intelligence system, which integrates a range of advanced predictive modeling and optimization tools developed across the consortium and provides insights and analytics on all the critical mining processes and operations, including productivity, employee safety and environmental monitoring.
The system will be deployed in five mines.
Duration: May 2020 – April 2024
Call Identifier: SC5-09-2018-2019
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under GA No. 869529.
Our Role: GEOSS is the Global Earth Observation System of Systems, which covers many independent Earth observation systems and providers, e.g. satellites, drones, and in-situ sensors, at a global scale. LIBRA is developing the cognitive search engine for GEOSS data based on advanced AI-enabled Natural Language Processing (NLP). This search engine will empower scientists and stakeholders in the Climate Change domain, allowing them to search effectively for data and information using natural language queries, much as we all use Google search for general inquiries (a toy illustration follows this project summary). Moreover, LIBRA will provide tools for the multifaceted enrichment and curation of the GEOSS metadata. LIBRA is also EIFFEL's exploitation manager, coordinating the exploitation effort of the project's consortium.
Duration: June 2021 – May 2024
Call Identifier: H2020-LC-CLA-2020-2
This project has received funding from the European Union's Horizon 2020 research and innovation programme under GA No. 101003518.
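To make the idea of natural-language search over metadata concrete, here is a minimal semantic-search sketch using sentence embeddings. It is an illustration under assumed tooling (the sentence-transformers library and an off-the-shelf model), not the EIFFEL implementation, and the metadata records are invented.

```python
# Minimal semantic search over dataset descriptions; illustrative
# only, not the EIFFEL engine. Records are invented examples.
from sentence_transformers import SentenceTransformer, util

records = [
    "Sentinel-2 surface reflectance imagery over Europe",
    "In-situ river discharge measurements, Danube basin",
    "Global sea surface temperature anomalies, monthly",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = model.encode(records, convert_to_tensor=True)

query = model.encode("satellite images of land cover", convert_to_tensor=True)
scores = util.cos_sim(query, corpus)[0]   # cosine similarity to each record
best = int(scores.argmax())
print(records[best], float(scores[best]))
```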
Client's name and type: Coffee Island (retailer with more than 400 franchise shops)
Industry: Food and beverage
Project: E-commerce data analysis and customer segmentation based on sales data
Tools developed: We designed a data warehouse for the collection and analysis of all the chain's e-commerce data. On top of it, we built a customer segmentation framework (via RFM analysis) based on the customers' buying behavior, dedicated to customer inspection and direct marketing campaigns (a minimal RFM sketch follows below). We also created an e-commerce sales dashboard set that enables 360° inspection and performance monitoring of the chain's stores and products.
Challenges met: Handling a vast amount of data from several different sources to develop a tool that helps the Coffee Island team build the chain's strategic roadmap.
Technologies used: Apache NiFi, MySQL, PostgreSQL, Python, Pandas, Tableau Online, RFM analysis
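For reference, RFM segmentation can be expressed compactly in pandas: score each customer on Recency, Frequency and Monetary value, then bucket the scores by quantiles. The sketch below assumes a transactions table with customer_id, order_date and amount columns (hypothetical names), not Coffee Island's actual schema.

```python
# RFM segmentation sketch in pandas; column names are hypothetical.
import pandas as pd

def rfm_scores(tx: pd.DataFrame) -> pd.DataFrame:
    now = tx["order_date"].max()
    rfm = tx.groupby("customer_id").agg(
        recency=("order_date", lambda d: (now - d.max()).days),
        frequency=("order_date", "count"),
        monetary=("amount", "sum"),
    )
    # Quartile scores; 4 is best (recent, frequent, high-spending).
    rfm["R"] = pd.qcut(rfm["recency"], 4, labels=[4, 3, 2, 1]).astype(int)
    rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4]).astype(int)
    rfm["M"] = pd.qcut(rfm["monetary"], 4, labels=[1, 2, 3, 4]).astype(int)
    return rfm
```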
Client's name and type: National Bank of Greece (Financial Institution)
Industry: Finance
Project: Machine Learning / Natural language processing competition (Link)
Tools developed: LIBRA co-developed and supported NBG's "Information retrieval in court decisions" challenge.
This included the development of a hybrid pseudo-anonymizer to remove sensitive data from the court decisions (OCR-scanned documents in Greek). The anonymizer combined NLP techniques, such as named entity recognition (NER) and part-of-speech (POS) tagging, to identify sensitive entities, with pattern matching and term-frequency techniques for best performance.
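A stripped-down version of such a hybrid anonymizer is sketched below: NER catches names of people and organizations, while regexes catch structured identifiers. The spaCy model name, entity labels and patterns are illustrative assumptions, not the system delivered to NBG.

```python
# Hybrid pseudo-anonymizer sketch: NER for names, regex for
# structured identifiers. Model, labels and patterns are illustrative.
import re
import spacy

nlp = spacy.load("el_core_news_sm")  # assumed off-the-shelf Greek pipeline
ID_PATTERNS = [
    (re.compile(r"\b\d{9}\b"), "[TAX_ID]"),        # 9-digit identifiers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
]

def pseudo_anonymize(text: str) -> str:
    doc = nlp(text)
    for ent in reversed(doc.ents):                 # reversed keeps offsets valid
        if ent.label_ in {"PERSON", "PER", "ORG", "GPE"}:
            text = text[:ent.start_char] + f"[{ent.label_}]" + text[ent.end_char:]
    for pattern, placeholder in ID_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```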
Challenges met: Automatic question answering over court decisions based on cutting-edge NLP models; information extraction in the form of passage retrieval; annotation of very large documents requiring domain-specific expertise.
Technologies used: NLP and deep learning frameworks such as spaCy, NLTK, Gensim, TensorFlow, and PyTorch.
Client's name and type: Greca (Tourist Agency)
Industry: Tourism
Project: Personalized Product ranking Recommendation System supported by advanced A/B testing
Tools developed:
We developed a recommendation system for our client's products, driven by web behavioral data. The system includes various recommendation engines for the different sections of our client's website, such as product re-ranking recommendations. We started with data analysis to determine parameters like visit duration and product inspection journeys, in order to design preference profiles across products and traveling destinations; these profiles were used to enrich the personalized recommendations we generated. The system serves both new and returning visitors using different strategies. Within the overall architecture, we had to develop a solution for storing recommendation log data with the data integrity required to run advanced A/B test experiments that validate the recommendation strategies (a minimal validation sketch follows). Finally, we developed an API endpoint that returns recommendations whenever a visitor lands on a page of interest.
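The A/B validation itself comes down to comparing conversion rates between strategies using the logged impressions. A minimal two-proportion z-test, with invented counts, looks like this:

```python
# Two-proportion z-test sketch for comparing two recommendation
# strategies; the counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

clicks = [412, 361]          # conversions per strategy (hypothetical)
impressions = [9800, 9750]   # logged recommendation impressions

z, p_value = proportions_ztest(clicks, impressions)
print(f"z = {z:.2f}, p = {p_value:.4f}")  # small p => strategies differ
```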
Challenges met:
As a third party providing this solution, we had to develop a series of tests to validate the successful integration of the system and assure the quality of the user experience. This process included heavy analysis of web behavioral data and real-time monitoring of the performance of the API we deployed. Keeping track of product changes was also a major challenge, requiring regular recommendation updates.
Technologies used: Apache NiFi, Matomo, Python, NodeJS, PostgreSQL.
Client's name and type: Greca (Tourist Agency)
Industry: Tourism
Project: MarTech analytics framework with Algorithmic Attribution Modeling & Anomaly Detection
Tools developed:
We developed a data warehouse consolidating e-commerce data, web behavioural data and digital campaign data. We delivered a custom marketing-analytics Business Intelligence dashboard suite that supports advanced algorithmic attribution modelling and anomaly detection for ad-spend optimization (a toy attribution example follows below).
Challenges met: Thoroughly monitoring e-commerce data fused with web behavioural data along the full customer journey, across all touchpoints, in order to run impactful marketing campaigns.
Technologies used: Apache NiFi, MongoDB, PostgreSQL, Python, Pandas, Tableau Online.
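As a taste of attribution modelling, the sketch below spreads conversion credit over each customer's touchpoint path with a simple position-based rule. The production system used more advanced algorithmic models; the channel paths here are invented.

```python
# Position-based (U-shaped) attribution sketch: 40% of the credit to
# the first and last touchpoints, 20% spread over the middle ones.
from collections import defaultdict

def attribute(paths: list[list[str]]) -> dict[str, float]:
    credit: dict[str, float] = defaultdict(float)
    for path in paths:
        if len(path) == 1:
            credit[path[0]] += 1.0
        elif len(path) == 2:
            credit[path[0]] += 0.5
            credit[path[1]] += 0.5
        else:
            credit[path[0]] += 0.4
            credit[path[-1]] += 0.4
            for channel in path[1:-1]:
                credit[channel] += 0.2 / (len(path) - 2)
    return dict(credit)

# Invented journeys: each list is one converting customer's channels.
print(attribute([["search", "email", "direct"], ["social", "direct"]]))
```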
Client's name and type: PNOE (Start-Up)
Industry: Health Care / Fitness
Project Type: Stream processing and analytics, on-the-fly ETL, on-demand scalability, fast OLAP database
Solutions developed: The PNOE project required real-time processing, transformation and analytics of high-volume, multi-user streaming biometric data. The aim is to provide real-time analytics and demanding algorithmic calculations on several health/fitness metrics displayed in the PNOE mobile app. Such requirements cannot be covered with old-fashioned data warehousing, i.e. collecting all the data and performing the aggregations and transformations overnight. Our solution stack was designed to allow on-the-fly, true streaming data processing through Kafka and Flink, and to seamlessly support scaling, both in terms of the evolving app complexity and the growing user base, via Kubernetes orchestration in AWS.
Challenges met: The task involved both real-time streaming data and out-of-order, retrospective streams. This is a challenging situation, and Flink's functionality had to be extended to support both data modalities in a common pipeline (the event-time idea is sketched below). The ETL was also fine-tuned to prevent the memory/processing load from growing out of proportion.
Technologies used: Apache Kafka, Apache Flink, Scala, Kubernetes.
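The crux of mixing live and retrospective streams is to key all processing on event time rather than arrival time. The Flink extension itself is out of scope here, but the idea can be sketched with a plain Kafka consumer in Python; the topic and field names are hypothetical.

```python
# Event-time bucketing sketch: live and retrospective records follow
# the same path because we key on the embedded timestamp, not on
# arrival time. Topic and field names are hypothetical.
import json
from collections import defaultdict
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "biometrics-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["biometrics"])

# (user_id, minute of event time) -> readings in that minute
minute_buckets: dict[tuple[str, int], list[float]] = defaultdict(list)

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    record = json.loads(msg.value())
    key = (record["user_id"], record["event_ts"] // 60)  # event time, not arrival
    minute_buckets[key].append(record["heart_rate"])
```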
Client's name and type: Alpha Marine (Marine Consultancy Firm)
Industry: Maritime
Project: Visualization engine for the company's Deep Dive project, with causality/regression analysis
Tools developed: We developed bespoke visual analytics and data science studies for Alpha Marine's Deep Dive project on shipping companies' culture. A major deliverable was the Deep Dive data parser, which takes care of the data retrieval stage; special care was taken so that it tackles the inconsistencies among the surveys in the most generalizable way. We followed a static visual analytics approach, designing and programming in Python an extensive set of diverging infographics to visualize the information and surface the insights inherent in the Deep Dive survey data (a minimal example follows below). We also provided automated trend analysis and sparse regression analysis tools.
Challenges met: Before any analysis, the data must be retrieved, prepared, and homogenized into a shape suitable for analytics. In the Deep Dive case, this poses particular challenges, mainly due to the multimodality of the survey response tools (both SurveyMonkey and custom Excel worksheets).
Technologies used: Python, Pandas, Matplotlib, scikit-learn, Tableau
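The diverging infographics mentioned above are essentially Likert-style diverging bar charts. A minimal matplotlib version, with invented survey data, looks like this:

```python
# Diverging Likert bar sketch: negative responses extend left of zero,
# positive responses right. The survey figures below are invented.
import matplotlib.pyplot as plt
import pandas as pd

likert = pd.DataFrame(
    {"Disagree": [30, 10], "Agree": [50, 65]},
    index=["Crew engagement", "Safety culture"],
)

fig, ax = plt.subplots()
ax.barh(likert.index, -likert["Disagree"], color="firebrick", label="Disagree")
ax.barh(likert.index, likert["Agree"], color="seagreen", label="Agree")
ax.axvline(0, color="black", linewidth=0.8)  # zero line separates the sides
ax.set_xlabel("% of responses")
ax.legend()
plt.tight_layout()
plt.show()
```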
Above, we presented a sample of our work for selected corporations, SMEs and start-ups.
These case studies are just a fraction of how our AI solutions can boost your product, optimize your operations and keep you a step ahead in a competitive international environment.