UBITECH participates in the virtual kick-off meeting of the CYRENE Research and Innovation Action, hosted by MAGGIOLI (October 14-15, 2020); the project officially started on October 1st, 2020. The project is funded by the European Commission under the Horizon 2020 Programme (Grant Agreement No. 952690) and spans the period October 2020 – September 2023. The vision of the CYRENE project is to enhance the security, privacy, resilience, accountability and trustworthiness of Supply Chains (SCs) through the provision of a novel and dynamic Conformity Assessment Process (CAP) that evaluates the security and resilience of supply chain services, the interconnected IT infrastructures composing these services, and the individual devices that support the operations of the SCs. To meet this objective, the proposed Conformity Assessment Process is based on a collaborative, multi-level, evidence-driven Risk and Privacy Assessment Approach that supports, at different levels, SC security officers and operators in recognizing, identifying, modelling, and dynamically analysing advanced persistent threats and vulnerabilities, as well as in handling daily cyber-security and privacy risks and data breaches.
Within CYRENE, UBITECH undertakes the overall technical management, ensuring the correct performance of the project’s technical tasks. Additionally, UBITECH drives the definition of the strategy for realizing the CYRENE Conformity Assessment process, as well as the CYRENE multi-level, evidence-driven Supply Chain Risk Assessment process. In particular, UBITECH is responsible for the design and development of the three horizontal layers (HLs) dedicated to Risk and Privacy Assessment: “HL1-Risk and Privacy Assessment of Supply Chains”, “HL2-Risk and Privacy Assessment of ICT-based Supply Chains” and “HL3-Risk and Privacy Assessment of CIIs’ IT ecosystems, including IoT ecosystems, devices and ICT Systems”. Finally, UBITECH develops a highly configurable crawling service that facilitates crawl control: it either crawls all available data and information, processing and post-processing them so that they are ready for shallow and deep analytics, or performs “focused crawling”, where, given a concept, the service explores relevant parts of the Web and collects the required threat- and risk-related data. Analytics libraries and algorithms (e.g. deep semantic analysis together with NLP techniques) for knowledge extraction and business intelligence will also be used to extract meaningful knowledge capable of identifying situations that can become a threat to the SCs under examination.
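The core idea behind “focused crawling” can be illustrated with a minimal sketch: pages are scored for relevance against a given concept, and only those clearing a threshold are kept and ranked for further exploration. The concept terms, function names and threshold below are illustrative assumptions, not the actual CYRENE implementation.

```python
import re
from heapq import heappush, heappop

# Hypothetical keyword set standing in for a richer concept model
# (e.g. "supply-chain cyber threats"); purely illustrative.
CONCEPT_TERMS = {"supply", "chain", "threat", "vulnerability", "risk", "breach"}

def relevance(text: str, concept_terms=CONCEPT_TERMS) -> float:
    """Score a page's text by the fraction of concept terms it mentions."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return len(tokens & concept_terms) / len(concept_terms)

def focused_order(pages: dict, threshold: float = 0.3) -> list:
    """Return URLs whose text clears the relevance threshold,
    most relevant first -- the selection step of a best-first focused crawl."""
    heap = []
    for url, text in pages.items():
        score = relevance(text)
        if score >= threshold:
            heappush(heap, (-score, url))  # max-heap via negated score
    return [heappop(heap)[1] for _ in range(len(heap))]
```

In a real crawler this scoring would sit inside the fetch loop, deciding which outgoing links enter the frontier; here it only ranks already-fetched page texts, which keeps the sketch self-contained.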