Acquire the skills to design and maintain massive data systems!

CPF-eligible, with multiple funding options of up to 100%

Request a callback · Access the programme

The 3P approach

Ready to take off
Full immersion
Ready to perform

Our training centre guides you in identifying the ideal training and helps you maximize funding opportunities.
We give you all the keys for a confident start.

Dive into an immersive, intensive training built around practical workshops and real case studies.
Learn by doing, and develop concrete skills directly applicable to your future projects.

At the end of your programme, we evaluate the skills you have acquired, issue a certification attesting to your expertise, and support you to ensure the success of your professional projects.
You are now ready to excel!

Description of the training

This training teaches the skills needed to design, develop and maintain large-scale data processing systems. You will learn to work with tools such as Hadoop and Spark, build ETL processes, and manage data pipeline architecture.
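
To give a flavour of the hands-on work, here is a minimal ETL sketch in PySpark; the paths, field names and aggregation are illustrative assumptions, not course material:

    # Minimal ETL sketch with PySpark (paths and column names are hypothetical).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw JSON events from a data lake.
    raw = spark.read.json("s3a://raw-bucket/events/")

    # Transform: drop incomplete records and aggregate by day and event type.
    daily = (
        raw.filter(F.col("event_type").isNotNull())
           .withColumn("event_date", F.to_date("timestamp"))
           .groupBy("event_date", "event_type")
           .count()
    )

    # Load: persist the result as Parquet, partitioned by date.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3a://curated-bucket/daily_counts/"
    )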


Training objectives

At the end of this training, you will be able to:

  • Understand the key concepts of data engineering.
  • Design and deploy robust data pipelines.
  • Use tools such as Apache Spark and Hadoop for data engineering.
  • Optimize massive data infrastructures.
  • Ensure data security and governance.

Who is this training for?

This training is aimed at:

  • Developers and software engineers wishing to design and deploy data pipelines.
  • Data architects designing scalable Big Data infrastructures.
  • Database administrators who need to manage massive data effectively.
  • Data engineers, data analysts and data scientists looking for a deeper understanding of data pipelines.
  • IT project managers overseeing data engineering projects.

Prerequisites

Basic knowledge of information systems.

Training programme

Introduction to data engineering and pipeline design

  • Fundamentals of data engineering and data pipelines.
  • Designing data pipelines with Apache Spark and Hadoop.

Large-scale data processing and management

  • Use of NoSQL databases for mass data management.
  • Pipeline orchestration and data flow automation (see the sketch below).
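
As a taste of the orchestration module, here is a minimal sketch using Apache Airflow, one common orchestrator (the programme does not prescribe a specific tool, and the DAG and task names are hypothetical):

    # Orchestration sketch with Apache Airflow (tool choice is an assumption).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_events_pipeline",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="echo extract")
        transform = BashOperator(task_id="transform", bash_command="echo transform")
        load = BashOperator(task_id="load", bash_command="echo load")

        # Run the three steps in sequence, once per day.
        extract >> transform >> load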

Security, governance and optimization of data systems

  • Ensuring data security and compliance: GDPR, access management (see the sketch below).
  • Optimizing the performance of data processing and storage systems.
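
To illustrate the kind of GDPR-minded practice covered here, below is a minimal pseudonymization sketch in PySpark; hashing a column is only one building block of compliance, and the column names are hypothetical:

    # Pseudonymization sketch: replace a PII column with a SHA-256 digest.
    # (Column names are hypothetical; full GDPR compliance also involves
    # access management, retention policies, and more.)
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pseudonymize").getOrCreate()

    users = spark.createDataFrame(
        [("alice@example.com", "FR"), ("bob@example.com", "DE")],
        ["email", "country"],
    )

    # Hash the email so downstream consumers never see the raw address.
    safe = users.withColumn("email", F.sha2(F.col("email"), 256))
    safe.show(truncate=False)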

Training strengths

Alternating theory and practice.
Trainers experienced in Data Engineering.
Access to modern tools and platforms.
Training adapted to all, with accessible prerequisites.

Pedagogical methods and tools used

Live demonstrations on Big Data services.
Real case studies and practical group work.
Massive data management simulations.
Feedback on the challenges faced in real projects.

Evaluation

Assessment is carried out through:

  • Multiple-choice quizzes (MCQs) to test understanding of concepts.
  • Practical case studies and group discussions.
  • Ongoing evaluation during practical sessions.

Normative References

ISO/IEC 27001: Information security management.
GDPR: General Data Protection Regulation (EU).
ISO 22301: Business continuity management.
SOC 2: Criteria for security, availability and confidentiality in cloud services.

Modalities


Inter-company or remote

Duration: 3 days

Price: €4,000

For more details, contact us.

Intra-enterprise

Duration and programme can be customized according to your company's specific needs.

For more details, contact us.