Technics Publications

Data Modeling Zone (DMZ) returns to Phoenix! March 4-6, 2025.

Applications deliver value only when they meet user needs. Yet organizations spend millions of dollars and thousands of hours every year developing solutions that fail to deliver. Much of this waste stems from poorly captured and articulated business requirements. Data models prevent this waste by capturing business terminology and needs in a precise form and at varying levels of detail, ensuring fluid communication across business and IT. Data modeling is therefore an essential skill for anyone involved in building an application, from data scientists and business analysts to software developers and database administrators. Data modeling is all about understanding the data used within our operational and analytics processes, documenting this knowledge in a precise form called the “data model”, and then validating this knowledge through communication with both business and IT stakeholders.

DMZ is the only conference completely dedicated to data modeling. DMZ US 2025 will contain five tracks:

  • Skills (fundamental and advanced modeling techniques)
  • Technologies (AI, mesh, cloud, lakehouse, modeling tools, and more)
  • Patterns (reusable modeling and architectural structures)
  • Growth (communication and time management techniques)
  • Semantics (graphs, ontologies, taxonomies, and more)

The DMZ US 2025 Program
(Call for Speakers Still Open!)

Pre-conference Workshops

Skills

DataOps, GitOps, and Docker containers are changing the role of data modeling, now at the center of end-to-end metadata management.
Success with self-service analytics, data meshes, microservices, and event-driven architectures can be undermined by the need to keep data catalogs and dictionaries interoperable with constantly evolving schemas for databases and data exchanges.
In other words, the business side of human-readable metadata management must stay up to date and in sync with the technical side of machine-readable schemas. This process can only work at scale if it is automated.
It is hard enough for IT departments to keep schemas in sync across the various technologies involved in data pipelines. For data to be useful, business users must also have an up-to-date view of the structures, complete with context and meaning.

In this session, we will review the options available to create the foundations for a data management framework that provides architectural lineage and curation of metadata.

Assuming no prior knowledge of data modeling, we start off with an exercise that will illustrate why data models are essential to understanding business processes and business requirements. Next, we will explain data modeling concepts and terminology, and provide you with a set of questions you can ask to quickly and precisely identify entities (including both weak and strong entities), data elements (including keys), and relationships (including subtyping). We will discuss the three different levels of modeling (conceptual, logical, and physical), and for each explain both relational and dimensional mindsets.

Steve Hoberman’s first word was “data”. He has been a data modeler for over 30 years, and thousands of business and data professionals have completed his Data Modeling Master Class. Steve is the author of 11 books on data modeling, including The Align > Refine > Design Series and Data Modeling Made Simple. Steve is also the author of Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, creator of the Data Modeling Institute’s Data Modeling Certification exam, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, lecturer at Columbia University, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award.

Advanced Data Modeling Session coming soon!

Communication


Change is no longer the status quo; disruption is. Over the past five years, major disruptions have occurred in all our lives, leaving some of us reeling while others stand tall, egging on more. People approach disruption differently. Some seem to adjust, quickly looking for ways to optimize or create efficiencies ahead of the coming change. Others dig in their heels, question everything, and insist on all the answers, in detail, right away. Then there are the ones who are ready and willing to take disruption on. In this workshop you will find out which profile best suits you, how that applies to big organizational efforts like data governance and AI and their impact on data management and data modeling, and finally how you can bridge the divide between these profiles to harness the disruption and calm the chaos.

  • Disruption Research
  • What is the sustainable disruption model?
  • Are you a Disrupter, Optimizer or Keeper:  Take the Quiz
  • Working with others 
  • Three take-aways

Laura Madsen is a global data strategist, keynote speaker, and author. She advises data leaders in healthcare, government, manufacturing, and tech. Laura has spoken at hundreds of conferences and events, inspiring organizations and individuals alike with her iconoclastic disrupter mentality. Laura is a co-founder and partner of Moxy Analytics, a Minneapolis-based consulting firm, which converges two of her biggest passions: helping companies define and execute successful data strategies and radically challenging the status quo.

Storytelling is a time-honored human strategy for communicating effectively, building community, and creating networks of trust. In the first half of this 3-hour workshop you will learn accessible and memorable tools for crafting and telling stories about yourself, your experiences, and values. In the second half, you will apply what you learn to your professional contexts, including how to use stories and storytelling to:

  • Explain the value of data modeling
  • Validate a data model
  • Interpret analytics

Liz Warren, a fourth-generation Arizonan, is the faculty director and one of the founders of the South Mountain Community College Storytelling Institute in Phoenix, Arizona. Her textbook, The Oral Tradition Today: An Introduction to the Art of Storytelling is used at colleges around the nation. Her recorded version of The Story of the Grail received a Parents’ Choice Recommended Award and a Storytelling World Award. The Arizona Humanities Council awarded her the Dan Schilling Award as the 2018 Humanities Public Scholar. In 2019, the American Association of Community Colleges awarded her the Dale Parnell Distinguished Faculty Award. Recent work includes storytelling curricula for the University of Phoenix and the Nature Conservancy, online webinars for college faculty and staff around the country, events for the Heard Museum, the Phoenix Art Museum, the Desert Botanical Garden, and the Children’s Museum of Phoenix, and in-person workshops for Senator Mark Kelly’s staff and Governor Ducey’s cabinet.

Dr. Travis May has been a part of South Mountain Community College for 23 years and is currently the Interim Dean of Academic Innovation. His role entails providing leadership for the South Mountain Community College (SMCC) Construction Trades Institute, advancing faculty’s work with Fields of Interest, and strengthening localized workforce initiatives. Dr. May holds a Bachelor of Arts in Anthropology from Arizona State University, a Master of Education in Educational Leadership, and a Doctor of Education in Organizational Leadership and Development from Grand Canyon University. Dr. May has immersed himself as a Storytelling faculty member for the past nine years and is highly regarded as an instructor. He approaches his classes with student success in mind, knowing that he is also altering the course of his students’ lives. His classes are fun, engaging, and challenging, and he encourages students to embrace their inner voice so they can share their stories with the world.

Certification


Unlock the potential of your data management career with the Certified Data Management Professional (CDMP) program by DAMA International. As the global leader in Data Management, DAMA empowers professionals like you to acquire the skills, knowledge, and recognition necessary to thrive in today’s data-driven world. Whether you’re a seasoned data professional or an aspiring Data Management expert, the CDMP certification sets you apart, validating your expertise and opening doors to new career opportunities.

CDMP is recognized worldwide as the gold standard for Data Management professionals. Employers around the globe trust and seek out CDMP-certified individuals, making it an essential credential for career advancement.

All CDMP certification levels require passing the Data Management Fundamentals exam. This workshop is aimed at letting you know what to expect when taking the exam and how to define your best strategy for answering it. It is not intended to teach you Data Management, but to introduce you to CDMP and briefly review the most relevant topics to keep in mind. After our break for lunch, you will have the opportunity to take the exam in its PIYP (Pay If You Pass) modality!

Through the first part of this workshop (9:00-12:30), you will get:

  • Understanding of how CDMP works, what type of questions to expect, and best practices when responding to the exam.
  • A summary of the most relevant topics of Data Management according to the DMBoK 2nd Edition
  • A series of recommendations for you to define your own strategy on how to face the exam to get the best score possible
  • A chance to answer the practice exam to test your strategy


Topics covered:

  1. Introduction to CDMP
  2. Overview and summary of the most relevant points of DMBoK Knowledge Areas:
    1. Data Management
    2. Data Handling Ethics
    3. Data Governance
    4. Data Architecture
    5. Data Modeling
    6. Data Storage and Operations
    7. Data Integration
    8. Data Security
    9. Document and Content Management
    10. Master and Reference Data
    11. Data Warehousing and BI
    12. Metadata Management
    13. Data Quality

  3. Analysis of sample questions

After our break for lunch, we will come back full of energy to take the CDMP exam in the PIYP (Pay if you Pass) modality.


Those registered for this workshop will receive an Event CODE to enroll in the CDMP exam at no upfront charge before taking it. The Event CODE will be emailed along with instructions for enrolling in the exam. Once enrolled, you will have access to the Practice Exam; taking it as many times as possible before the exam is strongly recommended.


Considerations:

  • PIYP means that if you pass the exam (a passing score is 60% of answers correct), you must pay for it (US$300.00) before leaving the room, so be ready with your credit card. If you were expecting a score of 70 or above and you get 69, you must still pay for the exam.
  • You must bring your own personal device (laptop or tablet, not a mobile phone) with the Chrome browser installed.
  • Work laptops are not recommended, as they might have firewalls that will not allow you to access the exam platform.
  • If English is not your main language, you should enroll in the exam as ESL (English as a Second Language), and you may wish to install a translator as a Chrome extension.
  • Data Governance and Data Quality specialty exams will also be available


If you are interested in taking this workshop, please complete this form to receive your Event CODE and to secure a spot to take the exam.

Semantics


Learn how to model an RDF (Resource Description Framework) graph, the underpinning of a true inference-capable knowledge graph.

We will cover the fundamentals of RDF Graph Data Models using the Financial Industry Business Ontology (FIBO), and see how to create a domain-specific graph model on top of the FIBO Ontology. Learn how to:

  • Build an RDF graph
  • Data model with RDF
  • Validate data with SHACL (Shape Constraint Language)
  • Query RDF graphs with SPARQL
  • Integrate LLMs with a graph model 
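
To make the triple idea concrete before the session, here is a minimal, library-free Python sketch: facts are (subject, predicate, object) triples, and a query is a pattern match over them, much as a SPARQL variable matches any term. The vocabulary is illustrative, not taken from FIBO, and a real RDF stack would use a library such as rdflib.

```python
# A dependency-free sketch of the RDF idea: every fact is a
# (subject, predicate, object) triple, and querying is pattern matching.
# The ex: vocabulary below is illustrative, NOT taken from FIBO.
EX = "http://example.org/fin#"

triples = {
    (EX + "Bank", "rdfs:subClassOf", EX + "FinancialInstitution"),
    (EX + "acmeBank", "rdf:type", EX + "Bank"),
    (EX + "acmeBank", "rdfs:label", "ACME Bank"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None is a wildcard,
    playing the role of a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?s WHERE { ?s rdf:type ex:Bank }
banks = [s for s, _, _ in match(p="rdf:type", o=EX + "Bank")]
print(banks)  # ['http://example.org/fin#acmeBank']
```

A real knowledge graph would of course use an RDF store with SPARQL and SHACL support; the point here is only that the triple model itself is small enough to fit in a few lines.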

Sumit Pal is a former Gartner VP Analyst in the data management and analytics space. He has more than 30 years of experience in the data and software industry, in roles spanning startups to enterprise organizations, building, managing, and guiding teams and building scalable software systems across the stack, from middle tier to data layer, analytics, and UI, using big data, NoSQL, database internals, data warehousing, data modeling, and data science. He is also the published author of a book on SQL engines and developed a MOOC course on big data.

With the unpredictable trajectory of AI over the near future, ranging from a mild AI winter to AI potentially getting out of our control, integrating AI into the enterprise should be spearheaded by highly curated BI systems led by the expertise of human BI analysts and subject matter experts. In this session, we will explore how these two facets of intelligence can synergize to transform businesses into dynamic, highly adaptive entities, in a way that is both cautious and enhances competitive capability.

Based on concepts from the book “Enterprise Intelligence,” attendees will gain an in-depth understanding of building a resilient enterprise by integrating BI structures into an Enterprise Knowledge Graph (EKG). The key topic is the integration of the roles of Knowledge Graphs, Data Catalogs, and BI-derived structures like the Insight Space Graph (ISG) and Tuple Correlation Web (TCW). The session will also emphasize the importance of data mesh methodology in enabling the seamless onboarding of more BI sources, ensuring robust data governance and metadata management.

Key Takeaways for Attendees:

  1. Drive Safe AI Integration: Discover how to use highly curated BI data to enhance the accuracy and depth of your analyses and understand the potential of combining BI with AI for more insightful and predictive analytics.
  2. Architectural Frameworks and Data Mesh: Learn strategies to integrate diverse data sources into a cohesive Enterprise Knowledge Graph using data mesh methodology, ensuring robust data governance and metadata management.
  3. Create a Resilient Enterprise: Gain insights into creating an intelligent enterprise capable of making innovative decisions by harnessing the synergy between BI and AI. Understand how to build and maintain a robust data infrastructure that drives organizational success.

Explore the transformative potential of Enterprise Intelligence and equip your organization with the tools and knowledge to navigate and excel in the complexities of the modern world.

Eugene Asahara, with a rich history of over 40 years in software development, including 25 years focused on business intelligence, particularly SQL Server Analysis Services (SSAS), is currently working as a Principal Solutions Architect at Kyvos Insights. His exploration of knowledge graphs began in 2005, when he developed Soft-Coded Logic (SCL), a .NET Prolog interpreter designed to modernize Prolog for a data-distributed world. Later, in 2012, Eugene ventured into creating Map Rock, a project aimed at constructing knowledge graphs that merge human and machine intelligence across numerous SSAS cubes. While these initiatives didn’t gain extensive adoption at the time, the lessons learned have proven invaluable. With the emergence of Large Language Models (LLMs), building and maintaining knowledge graphs has become practically achievable, and Eugene is leveraging his past experience and insights from SCL and Map Rock to this end. He resides in Eagle, Idaho, with his wife, Laurie, a celebrated watercolorist known for her award-winning work in the state, and their two cats.

The Main Event

Skills


Restore a balance between a code-first approach, which results in poor data quality and unproductive rework, and too much data modeling, which gets in the way of getting things done.

You will learn how the principles of Eric Evans’ popular book “Domain-Driven Design”, published in 2003 for software development, have been applied and adapted to data modeling, resulting in a pragmatic approach to designing data structures at the initial phase of metadata management.

In this session, you will learn how to strike a balance between too much data modeling, which may get in the way of getting things done, and not enough data modeling, which often results in suboptimal applications and poor data quality, one of the causes of AI “hallucinations”.

You will also learn how to tackle complexity at the heart of data, and how to reconcile Business and IT through a shared understanding of the context and meaning of data.

Pascal Desmarets is the founder and CEO of Hackolade, a data modeling tool for NoSQL databases, storage formats, REST APIs, and JSON in RDBMS. Hackolade pioneered Polyglot Data Modeling, which is data modeling for polyglot data persistence and data exchanges. With Hackolade’s Metadata-as-Code strategy, data models are co-located with application code in Git repositories as they evolve and are published to business-facing data catalogs to ensure a shared understanding of the meaning and context of your data. Pascal is also an advocate of Domain-Driven Data Modeling.

When developing a data analytics platform, how do we bring together business requirements on the one hand with the specific data structures to be created on the other? Model-Based Business Analysis (MBBA), a process model embedded in an agile context, can make a valuable contribution here. MBBA is based on specified use cases, runs through several phases, and generates many artifacts relevant to development activities, in particular conceptual data models that serve as templates for EDW implementations. The MBBA process concludes with concrete logical data structures for an access layer, usually star schema definitions. Participants will get an overview of all phases, objectives, and deliverables of the MBBA process.

  • You will learn how conceptual data models help with requirements analysis for data analytics platforms.
  • You will learn which steps and deliverables are necessary for such an analysis and what role data governance plays here.
  • You will learn how all these steps and models can be combined in a holistic process to provide significant added value for development.

Peer M. Carlson is Principal Consultant at b.telligent (Germany) with extensive experience in the field of Business Intelligence & Data Analytics. He is particularly interested in data architecture, data modeling, Data Vault, business analysis, and agile methodologies. As a dedicated proponent of conceptual modeling and design, Peer places great emphasis on helping both business and technical individuals enhance their understanding of overall project requirements. He holds a degree in Computer Science and is certified as a “BI Expert” by TDWI Europe.

In today’s data-driven world, organizations face the challenge of managing vast amounts of data efficiently and effectively. Traditional data warehousing approaches often fall short in addressing issues related to scalability, flexibility, and the ever-changing nature of business requirements. This is where Data Vault, a data modeling methodology designed specifically for data warehousing, comes into play.

In this introductory session, we will explore the fundamentals of Data Vault, a contemporary approach that simplifies the process of capturing, storing, and integrating data from diverse sources.

Participants will gain a comprehensive understanding of the key concepts and components of Data Vault, including hubs, links, and satellites. We will discuss how it promotes scalability, auditability, and adaptability, making it an ideal choice for organizations looking to future-proof their data solutions.
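
As a rough illustration of those components, here is a hypothetical sketch in Python with SQLite: a hub carries the business key, a satellite carries descriptive attributes over time, and a link (omitted here) would relate hubs. Table and column names are illustrative assumptions, not a prescribed Data Vault standard.

```python
# A hypothetical Data Vault skeleton in SQLite. A hub stores the business
# key, a satellite stores descriptive attributes over time, and a link
# (omitted here) would relate hubs. All names are illustrative.
import hashlib
import sqlite3

def hk(business_key):
    # Hash key derived from the business key, a common Data Vault practice
    return hashlib.md5(business_key.encode()).hexdigest()

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,   -- hash of the business key
    customer_bk   TEXT NOT NULL,      -- the business key itself
    load_dts      TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    load_dts      TEXT NOT NULL,
    name          TEXT,
    city          TEXT,
    record_source TEXT NOT NULL,
    PRIMARY KEY (customer_hk, load_dts)  -- history kept per load timestamp
);
""")

con.execute("INSERT INTO hub_customer VALUES (?,?,?,?)",
            (hk("C-1001"), "C-1001", "2025-03-04", "CRM"))
con.execute("INSERT INTO sat_customer_details VALUES (?,?,?,?,?)",
            (hk("C-1001"), "2025-03-04", "Ada Lovelace", "Phoenix", "CRM"))

row = con.execute("""
    SELECT h.customer_bk, s.name, s.city
    FROM hub_customer h
    JOIN sat_customer_details s USING (customer_hk)
""").fetchone()
print(row)  # ('C-1001', 'Ada Lovelace', 'Phoenix')
```

Note how new attributes or sources only ever add satellite rows or tables; the hub, and any loads that reference it, never change, which is the source of Data Vault’s auditability and adaptability.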

By the end of this session, beginners will have a solid foundation in Data Vault and be equipped with the knowledge to start their journey towards mastering this contemporary data warehousing technique.

Join us to discover how Data Vault can change your approach to data management and unlock the full potential of your organization’s data assets.

Dirk Lerner

Dirk Lerner is an experienced independent consultant and managing director of TEDAMOH. He is considered a global expert on BI architectures, data modeling and temporal data. Dirk advocates flexible, lean, and easily extendable data warehouse architectures.

Through the TEDAMOH Academy, Dirk coaches and trains in the areas of temporal data, data modeling certification, data modeling in general, and on Data Vault in particular.

As a pioneer for Data Vault and FCO-IM in Germany he wrote various publications, is a highly acclaimed international speaker at conferences and author of the TEDAMOH-blog and Co-Author of the book ‘Data Engine Thinking’.

Technologies


The Align > Refine > Design approach covers conceptual, logical, and physical data modeling (schema design and patterns), combining proven data modeling practices with database-specific features to produce better applications. Learn how to apply this approach when creating a DynamoDB schema. Align is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Refine is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Design is about the technical requirements. That is, designing to accommodate DynamoDB’s powerful features and functions.

You will learn how to design effective and robust data models for DynamoDB.
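
As a taste of the Design phase, here is a library-free Python sketch of one common DynamoDB physical pattern: a single table keyed by a partition key (PK) and sort key (SK), with related entity types overloaded into one partition so a single Query retrieves the whole aggregate. The key names and formats are illustrative assumptions, not part of the session material.

```python
# A library-free sketch of a common DynamoDB physical design: one table,
# a partition key (PK) plus sort key (SK), with several entity types
# overloaded into one partition. Key names and formats are illustrative.
from collections import defaultdict

table = defaultdict(dict)  # PK -> {SK -> item}

def put_item(item):
    table[item["PK"]][item["SK"]] = item

def query(pk, sk_prefix=""):
    """Mimics a DynamoDB Query: one partition, optional begins_with(SK)."""
    return [item for sk, item in sorted(table[pk].items())
            if sk.startswith(sk_prefix)]

# A customer and their orders share a partition, so one Query
# fetches the whole aggregate without joins.
put_item({"PK": "CUST#C1", "SK": "PROFILE", "name": "Ada"})
put_item({"PK": "CUST#C1", "SK": "ORDER#2025-01-15", "total": 40})
put_item({"PK": "CUST#C1", "SK": "ORDER#2025-02-03", "total": 25})

orders = query("CUST#C1", sk_prefix="ORDER#")
print([o["SK"] for o in orders])  # ['ORDER#2025-01-15', 'ORDER#2025-02-03']
```

Because the sort key embeds the order date, range and prefix queries come for free; this is the kind of access-pattern-first thinking the Design step applies to DynamoDB.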

Pascal Desmarets is the founder and CEO of Hackolade, a data modeling tool for NoSQL databases, storage formats, REST APIs, and JSON in RDBMS. Hackolade pioneered Polyglot Data Modeling, which is data modeling for polyglot data persistence and data exchanges. With Hackolade’s Metadata-as-Code strategy, data models are co-located with application code in Git repositories as they evolve and are published to business-facing data catalogs to ensure a shared understanding of the meaning and context of your data. Pascal is also an advocate of Domain-Driven Data Modeling.

The Align > Refine > Design approach covers conceptual, logical, and physical data modeling (schema design and patterns), combining proven data modeling practices with database-specific features to produce better applications. Learn how to apply this approach when creating an Elasticsearch schema. Align is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Refine is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Design is about the technical requirements. That is, designing to accommodate Elasticsearch’s powerful features and functions.

You will learn how to design effective and robust data models for Elasticsearch.
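
As a small taste of the Design phase here, consider a hypothetical Elasticsearch index mapping, expressed as the JSON body sent when the index is created. The field names are illustrative assumptions; the text/keyword split is the standard Elasticsearch idiom for full-text search versus exact-match filtering and aggregation.

```python
# A hypothetical Elasticsearch index mapping, expressed as the JSON body
# sent when the index is created. Field names are illustrative; the
# text/keyword split is the usual idiom for full-text search vs exact match.
import json

mapping = {
    "mappings": {
        "properties": {
            "title":      {"type": "text"},     # analyzed for full-text search
            "status":     {"type": "keyword"},  # exact match, sorting, aggregations
            "created_at": {"type": "date"},
            "price":      {"type": "double"},
        }
    }
}

print(json.dumps(mapping, indent=2))
```

In practice this body would accompany an index-creation request; the physical modeling decision to capture is which fields are analyzed text and which are exact-match keywords.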

Rafid is a data modeler who entered the field at the young age of 22, holding an undergraduate degree in Biology and Mathematics from the University of Ottawa. He was inducted into the DMC Hall of Fame by the Data Modeling Institute in July 2020, making him the first Canadian and 10th person worldwide to receive this honor. Rafid possesses extensive experience in creating standardized financial data models and utilizing various modeling techniques to enhance data delivery mechanisms. He is well-versed in data analytics, having conducted in-depth analyses of Capital Markets, Retail Banking, and Insurance data using both relational and NoSQL models. As a speaker, Rafid shared his expertise at the 2021 Data Modeling Zone Europe conference, focusing on the reverse engineering of physical NoSQL data models into logical ones. Rafid and his team recently placed second in an annual AI-Hackathon, focusing on a credit card fraud detection problem. Alongside his professional pursuits, Rafid loves recording music and creating digital art, showcasing his creative mind and passion for innovation in data modeling.

Fully Communication Oriented Information Modeling (FCOIM) is a groundbreaking approach that empowers organizations to communicate with unparalleled precision and elevate their data modeling efforts. FCOIM leverages natural language to facilitate clear, efficient, and accurate communication between stakeholders, ensuring a seamless data modeling process. With the ability to generate artifacts such as JSON, SQL, and DataVault, FCOIM enables data professionals to create robust and integrated data solutions, aligning perfectly with the project’s requirements.

You will learn:

  • The fundamentals of FCOIM and its role in enhancing communication within data modeling processes.
  • How natural language modeling revolutionizes data-related discussions, fostering collaboration and understanding.
  • Practical techniques to generate JSON, SQL, and DataVault artifacts from FCOIM models, streamlining data integration and analysis.

Get ready to be inspired by Marco Wobben, a seasoned software developer with over three decades of experience! Marco’s journey in software development began in the late 80s, and since then, he has crafted an impressive array of applications, ranging from bridge automation, cash flow and decision support tools, to web solutions and everything in between.

As the director of BCP Software, Marco’s expertise shines through in his experience developing off-the-shelf end products, automating data warehouses, and creating user-friendly applications. But that’s not all! Since 2001, he has been the driving force behind CaseTalk, the go-to CASE tool for fact-oriented information modeling.

Join us as we delve into the fascinating world of data and information modeling alongside Marco Wobben. Discover how his passion and innovation have led to the support of Fully Communication Oriented Information Modeling (FCO-IM), a game-changing approach used in institutions worldwide. Prepare to be captivated by his insights and experience as we explore the future of data modeling together!

The data mesh paradigm brings a transformative approach to data management, emphasizing domain-oriented decentralized data ownership, data as a product, self-serve infrastructure, and federated computational governance. AI can play a crucial role in architectural alignment. We’ll cover modeling a robust data mesh covering:

  1. Domain-Centric Model
  2. Data Product Interfaces
  3. Schema Evolution and Versioning
  4. Metadata and Taxonomy
  5. Development Quality and Observability
  6. Self-Service Infrastructure
  7. Governance & Compliance

You will learn:
1. Modeling to leverage data mesh design in creating accountable, interpretable, and explainable end-user consumption
2. Bridging between Data Fabric and Data Mesh architectures
3. Increasing the end-user efficacy of AI and analytics initiatives with a mesh model

Anshuman Sindhar is a seasoned enterprise data architect and a practitioner of process automation, analytics, and AI/ML, with expertise in building and managing data solutions in the domains of finance, risk management, customer management, and regulatory compliance.

During a career spanning 25+ years, Anshuman has been a professional services leader, adept at selling and managing multi-million-dollar data integration projects from start to finish with major system integrator firms, including KPMG, BearingPoint, IBM, Capco, Paradigm Technologies, and Quant16, helping customers achieve their digital transformation objectives in fast-paced, highly collaborative consulting environments. He currently works as an independent data architect.

The future of enterprise systems lies in their ability to inherently support data exchange and aggregation, crucial for advanced reporting and AI-driven insights. This presentation will explore the development and implementation of a groundbreaking platform designed to achieve these goals.

We aim to demonstrate how our 3D integration approach—encompassing data, time, and systems—facilitates secure data exchange across supply chains and complex conglomerates, such as government entities, which require the coordination of multiple interconnected systems. Our vision leverages AI to automatically analyze existing systems and generate sophisticated 3D systems. These systems, when networked together, enable structured and automated data exchange and support the seamless creation of data warehouses.

Central to our open-source platform is a procedural methodology for system creation. By utilizing core data models and a high-performance primary key structure, our solution ensures that primary keys remain consistent across various systems, irrespective of data transfers.
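One general way to keep primary keys consistent across systems, sketched below, is to derive surrogate keys deterministically from business keys, so every system computes the same key independently of data transfers. This is an illustration of the general idea under an assumed shared-namespace convention, not the platform's actual key structure.

```python
# Sketch: deterministic surrogate keys derived from business keys.
# Illustrates keys that survive data transfers between systems;
# not the platform's actual key structure.

import uuid

# A fixed namespace shared by all participating systems (assumed convention).
NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "example-enterprise-keys")

def surrogate_key(entity: str, business_key: str) -> str:
    """Any system computes the same key from the same business identifier."""
    return str(uuid.uuid5(NAMESPACE, f"{entity}:{business_key}"))

# Two independent systems derive identical keys -- no coordination needed.
key_in_crm = surrogate_key("customer", "ACME-0042")
key_in_erp = surrogate_key("customer", "ACME-0042")
print(key_in_crm == key_in_erp)  # prints True
```

Because the key is a pure function of the business identifier, records exchanged between systems line up without a central key registry.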

Join us as we delve into the unique features of our platform, highlighting its capabilities in data integration, AI application, and system migration. Discover how our approach can revolutionize enterprise systems, paving the way for enhanced data sharing and operational efficiency.

Blair Kjenner has been architecting and developing enterprise software for over forty years.  Recently, he had the opportunity to reverse engineer many different systems for an organization to help them find missed revenues.  The project resulted in recovering millions of uncollected dollars.  This inspired Blair to evaluate how systems get created and why we struggle with integration. Blair then formulated a new methodology for developing enterprise systems specifically to deal with the software development industry’s key issue in delivering fully integrated systems to organizations at a reasonable cost. Blair is passionate about contributing to an industry he has enjoyed so much.

Kewal Dhariwal is a dedicated researcher and developer committed to advancing the information technology industry through education, training, and certifications. Kewal has built many standalone and enterprise systems in the UK, Canada, and the United States and understands how we approach enterprise software today and the issues we face. He has worked closely with and presented alongside the leading thinkers in our industry, including John A. Zachman (Zachman Enterprise Framework), Peter Aiken (CDO, Data Strategy, Data Literacy), Bill Inmon (data warehousing to data lakehouse), and Len Silverston (Universal Data Models). Kewal is committed to advancing our industry by continually looking for new ways to improve our approach to systems development, data management, machine learning, and AI. Kewal was instrumental in creating the book behind this approach: his expansive knowledge of the industry let him immediately recognize that the approach was different, and he engaged his broad network of experts to weigh in on the topic and affirm his perspective.

Case Studies

DataOps, GitOps, and Docker containers are changing the role of data modeling, which now sits at the center of end-to-end metadata management.
Success with self-service analytics, data meshes, microservices, and event-driven architectures can be challenged by the need to keep data catalogs and dictionaries interoperable with constantly evolving schemas for databases and data exchanges.
In other words, the business side of human-readable metadata management must be up to date and in sync with the technical side of machine-readable schemas. This process can only work at scale if it is automated.
It is hard enough for IT departments to keep schemas in sync across the various technologies involved in data pipelines. For data to be useful, business users must also have an up-to-date view of the structures, complete with context and meaning.

In this session, we will review the options available to create the foundations of a data management framework providing architectural lineage and curation of metadata.
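As a small illustration of the kind of automation involved, machine-readable schema metadata can be extracted from a database catalog and published as a human-readable data dictionary. This is a generic sketch using SQLite's catalog, not tied to any specific tool discussed in the session.

```python
# Sketch: generate a human-readable data dictionary from a machine-readable
# schema, illustrating automated sync between the two sides of metadata.
# Generic example -- not tied to any specific catalog tool.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def data_dictionary(conn: sqlite3.Connection) -> dict:
    """Read table/column metadata straight from the database catalog."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {
        table: [
            {"column": col[1], "type": col[2], "primary_key": bool(col[5])}
            for col in conn.execute(f"PRAGMA table_info({table})")
        ]
        for table in tables
    }

print(data_dictionary(conn))
```

Running such an extractor on every schema change is what keeps the business-facing dictionary from drifting away from the deployed schemas.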

The integration of Generative AI into marketing data modeling marks a transformative step in understanding and engaging customers through data-driven strategies. This presentation will explore the definition and capabilities of Generative AI, such as predictive analytics and synthetic data generation. We will highlight specific applications in marketing, including personalized customer interactions, automated content creation, and enhanced real-time decision-making in digital advertising.

Technical discussions will delve into the methodologies for integrating Generative AI with existing marketing systems, emphasizing neural networks and Transformer models. These technologies enable sophisticated behavioral predictions and necessitate robust data infrastructures like cloud computing. Additionally, we will address the ethical implications of Generative AI, focusing on the importance of mitigating biases and maintaining transparency to uphold consumer trust and regulatory compliance.

Looking forward, we will explore future trends in Generative AI that are poised to further redefine marketing strategies, such as AI-driven dynamic pricing and emotional AI for deeper consumer insights. The session will conclude with strategic recommendations for marketers on how to leverage Generative AI effectively, ensuring they remain at the cutting edge of technological innovation and competitive differentiation.

You will:

  • Understand the Role of Generative AI in Marketing Data Modeling in Personalization and Efficiency: Learn how Generative AI can be utilized to analyze consumer data, enhance targeting accuracy, automate content creation, and improve overall marketing efficiency through personalized customer interactions.
  • Recognize the Importance of Ethical Practices in AI Implementation of Data Modeling and Execution: Identify the ethical considerations necessary when integrating AI into marketing strategies, including how to address data privacy, avoid algorithmic bias, and maintain transparency to ensure responsible use of AI technologies.
  • Anticipate and Adapt to Future AI Trends in Marketing: Acquire insights into emerging AI trends such as dynamic pricing and emotional AI, and develop strategies to incorporate these advancements into marketing practices to stay ahead in a rapidly evolving digital landscape.

Dr. Kyle Allison, renowned as The Doctor of Digital Strategy, brings a wealth of expertise from both the industry and academia in the areas of e-commerce, business strategy, operations, digital analytics, and digital marketing. With a remarkable track record spanning more than two decades, during which he ascended to C-level positions, his professional journey has encompassed pivotal roles at distinguished retail and brand organizations such as Best Buy, Dick’s Sporting Goods, Dickies, and the Army and Air Force Exchange Service.

Throughout his career, Dr. Allison has consistently led the way in developing and implementing innovative strategies in the realms of digital marketing and e-commerce. These strategies are firmly rooted in his unwavering dedication to data-driven insights and a commitment to achieving strategic excellence. His extensive professional background encompasses a wide array of business sectors, including the private, public, and government sectors, where he has expertly guided digital teams in executing these strategies. Dr. Allison’s deep understanding of diverse business models, coupled with his proficiency in navigating B2B, B2C, and DTC digital channels, and his mastery of both the technical and creative aspects of digital marketing, all highlight his comprehensive approach to digital strategy.

In the academic arena, Dr. Allison has been a pivotal figure in shaping the future generation of professionals as a respected professor & mentor, imparting knowledge in fields ranging from digital marketing, analytics, and e-commerce to general marketing and business strategies at prestigious institutions nationwide, spanning both public and private universities and colleges. Beyond teaching, he has played a pivotal role in curriculum development, course creation, and mentorship of doctoral candidates as a DBA doctoral chair.

As an author, Dr. Allison has significantly enriched the literature in business strategy, analytics, digital marketing, and e-commerce through his published works in Quick Study Guides, textbooks, journal articles, and professional trade books. He skillfully combines academic theory with practical field knowledge, with a strong emphasis on real-world applicability and the attainment of educational objectives.

Dr. Allison’s educational background is as extensive as his professional accomplishments, holding a Doctor of Business Administration, an MBA, a Master of Science in Project Management, and a Bachelor’s degree in Communication Studies.

For the latest updates on Dr. Allison’s work and portfolio, please visit DoctorofDigitalStrategy.com.

Do you want to communicate better with consumers and creators of enterprise data?  Do you want to effectively communicate within IT application teams about the meaning of data?  Do you want to break down data silos and promote data understanding?  Conceptual data models are the answer!

Many companies skip over the creation of conceptual data models and go right to logical or physical modeling.  They miss reaping the benefits that conceptual models provide:

  • Facilitate effective communication about data between everyone who needs to understand basic data domains and definitions.
  • Break down data silos to provide enterprise-wide collaboration and consensus.
  • Present a high-level, application neutral overview of your data landscape.
  • Identify and mitigate data complexities.

You will learn what a conceptual model is, how to create one on the back of a napkin in 10 minutes or less, and how to use that to drive communication at many levels.
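To give a flavor of how lightweight a napkin-sized conceptual model can be, the sketch below reduces one to named entities and the relationships between them. It is a hypothetical example with made-up entities, not the presenters' material.

```python
# Hypothetical napkin-sized conceptual model: just entities and relationships,
# no keys, no data types, no physical details.

entities = ["Customer", "Order", "Product"]

# (subject, verb phrase, object) -- readable by business and IT alike.
relationships = [
    ("Customer", "places", "Order"),
    ("Order", "contains", "Product"),
]

for subject, verb, obj in relationships:
    print(f"Each {subject} {verb} one or more {obj}s.")
```

The verbalized output is the communication vehicle: business stakeholders can confirm or correct each sentence without ever seeing a modeling tool.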

Kasi Anderson has been in the data world for close to 25 years serving in multiple roles including data architect, data modeler, data warehouse design and implementation, business intelligence evangelist, data governance specialist, and DBA.  She is passionate about bridging the gap between business and IT and working closely with business partners to achieve corporate goals through the effective use of data. She loves to examine data ecosystems and figure out how to extend architectures to meet new requirements and solve challenges. She has worked in many industries including manufacturing and distribution, banking, healthcare, and retail.

 In her free time, Kasi loves to read, travel, cook, and spend time with her family.  She enjoys hiking the beaches and mountains in the Pacific Northwest and loves to find new restaurants and wineries to enjoy.       

Laurel Sturges, a seasoned data professional, has been an integral part of the tech community helping businesses better understand and utilize data for over 40 years. She refers to problem solving as an adventure where she really finds passion in the process of discussing and defining data, getting into all the details including metadata, definitions, business rules, and everything that goes along with it.

Laurel is an expert in creating and delivering quality business data models and increasing communication between key business stakeholders and IT groups. She provides guidance for clients to make informed decisions so her partners can build a quality foundation for success.

She has a diverse background serving in a multitude of roles educating individuals as a peer and as an external advisor. She has served in many industries including manufacturing, aviation, and healthcare. Her specialization is relational data theory and usage of multiple modeling tools.

Outside of the data world, Laurel is learning to garden and loves to can jams and fresh fruits and veggies. Laurel is an active supporter of Special Olympics of Washington and has led her company’s Polar Plunge for Special Olympics team for 10 years, joyfully running into Puget Sound in February!

This presentation is aimed at anyone looking to make their data warehouse initiatives and products more agile and improve collaboration between teams and departments.

Flight Levels will be introduced—a lightweight and pragmatic approach to business agility. It helps organizations visualize and manage their work items across different operational levels to ensure effective collaboration and alignment with strategic goals.

The presentation explains the three Flight Levels:

  • Strategic Level: Focuses on long-term goals and strategic initiatives that set the direction for the entire organization.
  • Coordination Level: Coordinates work between various teams and departments to manage interactions and dependencies.
  • Operational Level: Concentrates on daily work and project execution by operational teams.

By visualizing work, improving communication, and continuously optimizing collaboration, Flight Levels provide a hands-on approach to enhancing agility in any organization. Practical examples from Deutsche Telekom will illustrate how Flight Levels are successfully used to increase transparency, efficiency, and value creation.

Kerstin Lerner is a Flight Levels Guide at Deutsche Telekom and an independent agile coach. She has more than 15 years of international experience in a variety of agile roles. Through her coaching and training, and as a knowledge catalyst, she helps leaders, organizations, and teams be more successful in their business.

Semantics


In this session, we will explore advanced structures within knowledge graphs that go beyond traditional ontologies and taxonomies, offering deeper insights and novel applications for data modeling. As knowledge graphs gain prominence in various domains, their potential for representing complex relationships and dynamics becomes increasingly valuable. This presentation will delve into five special graph structures that extend the capabilities of standard knowledge graph frameworks:

  1. Trophic Cascades: Understanding ecological hierarchies and interactions, and how these concepts can be applied to model dependencies and influence within organizational data.
  2. Non-deterministic Finite-State Automata (NFA): Leveraging NFAs to model complex decision processes and workflows that capture the probabilistic nature of real-world operations.
  3. Performance Management Strategy Maps: Using strategy maps to visualize and align organizational objectives, facilitating better performance management through strategic relationships and causal links.
  4. Bayesian Belief Networks, Causal Diagrams, and Markov Blankets: Implementing Bayesian networks and causal diagrams to model probabilistic relationships and infer causality within data. Utilizing Markov blankets to isolate relevant variables for a particular node in a probabilistic graphical model, enabling efficient inference and robust decision-making by focusing on the local dependencies.
  5. Workflows: Structuring workflows within knowledge graphs to represent and optimize business processes, enabling more efficient and adaptive operations.

By examining these structures, attendees will gain insights into how knowledge graphs can be utilized to model and manage intricate systems, drive innovation in data modeling practices, and support advanced business intelligence frameworks. This session is particularly relevant for data modelers, BI professionals, and knowledge management experts who are interested in pushing the boundaries of traditional knowledge graphs.
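As a small, concrete taste of one of these structures, the Markov blanket of a node in a directed graphical model is the union of its parents, its children, and its children's other parents. The sketch below computes one on a toy network; the variable names are invented, and this is an illustration rather than the presenter's code.

```python
# Sketch: computing the Markov blanket of a node in a Bayesian network
# (parents + children + the children's other parents). Illustrative only.

# A toy DAG expressed as child -> list of parents (hypothetical variables).
parents = {
    "Sprinkler": ["Season"],
    "Rain": ["Season"],
    "WetGrass": ["Sprinkler", "Rain"],
}

def markov_blanket(node: str) -> set:
    children = {c for c, ps in parents.items() if node in ps}
    coparents = {p for c in children for p in parents[c]}
    blanket = set(parents.get(node, [])) | children | coparents
    blanket.discard(node)
    return blanket

print(markov_blanket("Sprinkler"))
```

Conditioning on the blanket makes the node independent of everything else in the graph, which is exactly why it is useful for efficient local inference.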

Eugene Asahara, with a rich history of over 40 years in software development, including 25 years focused on business intelligence, particularly SQL Server Analysis Services (SSAS), is currently working as a Principal Solutions Architect at Kyvos Insights. His exploration of knowledge graphs began in 2005 when he developed Soft-Coded Logic (SCL), a .NET Prolog interpreter designed to modernize Prolog for a data-distributed world. Later, in 2012, Eugene ventured into creating Map Rock, a project aimed at constructing knowledge graphs that merge human and machine intelligence across numerous SSAS cubes. While these initiatives didn’t gain extensive adoption at the time, the lessons learned have proven invaluable. With the emergence of Large Language Models (LLMs), building and maintaining knowledge graphs has become practically achievable, and Eugene is leveraging his past experience and insights from SCL and Map Rock to this end. He resides in Eagle, Idaho, with his wife, Laurie, a celebrated watercolorist known for her award-winning work in the state, and their two cats, Venus and Bodhi.

Knowledge Graphs (KGs) are all around us, and we use them every day. Many of the emerging data management products, such as data catalogs, data fabrics, and MDM products, leverage knowledge graphs as their engines.

A knowledge graph is not a one-off engineering project. Building a KG requires collaboration between functional domain experts, data engineers, data modelers, and key sponsors. It also combines technology, strategy, and organizational aspects (focusing only on technology leads to a high risk of failure).

KGs are effective tools for capturing and structuring large amounts of structured, unstructured, and semi-structured data. As such, KGs are becoming the backbone of many systems, including semantic search engines, recommendation systems, conversational bots, and data fabrics.

This session shows data and analytics professionals the value of knowledge graphs and how to build semantic applications.

Sumit Pal is a former Gartner VP Analyst in the data management and analytics space. Sumit has more than 30 years of experience in the data and software industry, in roles spanning companies from startups to enterprise organizations, building, managing, and guiding teams and building scalable software systems across the stack: middle tier, data layer, analytics, and UI, using big data, NoSQL, database internals, data warehousing, data modeling, and data science. He is also the published author of a book on SQL engines and developed a MOOC course on big data.

In the ever-evolving landscape of data management, capturing and leveraging business knowledge is paramount. Traditional data modelers have long excelled at designing logical models that meticulously structure data to reflect the intricacies of business operations. However, as businesses grow more interconnected and data-driven, there is a compelling need to transcend these traditional boundaries.

Join us for an illuminating session where we explore the intersection of logical data modeling and semantic data modeling, unveiling how these approaches can synergize to enhance the understanding and utilization of business knowledge. This talk is tailored for data modelers who seek to expand their expertise and harness the power of semantic technologies.

We’ll delve into:

  • The foundational principles of logical data modeling, emphasizing how it captures and stores essential business data.
  • The transformative role of semantic data modeling in building knowledge models that contextualize and interrelate business concepts.
  • Practical insights on integrating logical data models with semantic frameworks to create a comprehensive and dynamic knowledge ecosystem.
  • Real-world examples showcasing the benefits of semantic modeling in enhancing data interoperability, enriching business intelligence, and enabling advanced analytics.

Prepare to embark on a journey that not only reinforces the core strengths of your data modeling skills but also opens new avenues for applying semantic methodologies to capture deeper, more meaningful business insights. This session promises to be both informative and inspiring, equipping you with the knowledge to bridge the gap between traditional data structures and the next generation of business knowledge modeling.

Don’t miss this opportunity to stay ahead in the data modeling domain and transform the way you capture and utilize business knowledge.

Jeffrey Giles is the Principal Architect at Sandhill Consultants, with over 18 years of experience in information technology. A recognized professional in Data Management, Jeffrey has shared his knowledge as a guest lecturer on Enterprise Architecture at the Boston University School of Management.

Jeffrey’s experience in information management includes customizing Enterprise Architecture frameworks and developing model-driven solution architectures for business intelligence projects. His skills encompass analyzing business process workflows and data modeling at the conceptual, logical, and physical levels, as well as UML application modeling.

With an understanding of both transactional and data warehousing systems, Jeffrey focuses on aligning business, data, applications, and technology. He has contributed to designing Data Governance standards, data glossaries, and taxonomies. Jeffrey is a certified DCAM assessor and trainer, as well as a DAMA certified data management practitioner. He has also written articles for the TDAN newsletter. Jeffrey’s practical insights and approachable demeanor make him a valuable speaker on data management topics.

DMZ US 2025 call for speakers now open

If you are interested in speaking at DMZ US 2025, in Phoenix, Arizona, March 4-6, please complete the “Call for Speakers” form here, and we will respond shortly.

Sessions are 60 minutes long, and each speaker will receive a complimentary conference pass to the full event.

Lock in the lowest prices today - Prices increase as tickets sell

Original price: $1,995.00. Current price: $1,299.00.

Why our current prices are so low

We keep costs low without sacrificing quality and pass the savings on through lower registration prices. As with the airline industry, however, prices rise as seats fill: as people register for the event, ticket prices go up, so the least expensive ticket is the one you buy today. If you would like to register your entire team and combine DMZ with an in-person team-building event, complete this form and we will contact you within 24 hours with discounted prices. If you are a student or work full-time for a not-for-profit organization, please complete this form and we will contact you within 24 hours with discounted prices.

DMZ sponsors

Platinum

Gold

Silver

Location and hotels

Desert Ridge is just a 20-minute ride from Phoenix Sky Harbor International Airport. There are many amazing hotels nearby; the two below are next to each other, include complimentary buffet breakfast, and share the same free shuttle to the conference site two miles away. We are holding a small block of rooms at each hotel, and these discounted rates are half the cost of nearby hotels, so book early!

The Sleep Inn

At the discounted rate of $163/night, the Sleep Inn is located at 16630 N. Scottsdale Road, Scottsdale, Arizona 85254. You can call the hotel directly at (480) 998-9211 and ask for the Data Modeling Zone (DMZ) rate, or book directly online by clicking here.

The Hampton Inn

At the discounted rate of $215/night, the Hampton Inn is located at 16620 North Scottsdale Road, Scottsdale, Arizona 85254. You can call the hotel directly at (480) 348-9280 and ask for the Data Modeling Zone (DMZ) rate, or book directly online by clicking here.

Pre-conference Workshops

Skills


Assuming no prior knowledge of data modeling, we start off with an exercise that will illustrate why data models are essential to understanding business processes and business requirements. Next, we will explain data modeling concepts and terminology, and provide you with a set of questions you can ask to quickly and precisely identify entities (including both weak and strong entities), data elements (including keys), and relationships (including subtyping). We will discuss the three different levels of modeling (conceptual, logical, and physical), and for each explain both relational and dimensional mindsets.

Steve Hoberman’s first word was “data”. He has been a data modeler for over 30 years, and thousands of business and data professionals have completed his Data Modeling Master Class. Steve is the author of 11 books on data modeling, including The Align > Refine > Design Series and Data Modeling Made Simple. Steve is also the author of Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, creator of the Data Modeling Institute’s Data Modeling Certification exam, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, lecturer at Columbia University, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award.

 From Conceptual Modeling to Graph Theory and 3NF data modeling, we are foundationally pursuing the modeling of objects/entities (nodes) and connections/relationships (edges). In addition, we are now seeing more of an emphasis on Semantic Integration and Business Concept Mapping. From a Data Vault and ELM (Ensemble Logical Modeling) perspective, this translates to focus on Core Business Concepts (CBCs) and Natural Business Relationships (NBRs). 

 In this workshop we will leverage several case examples to take a deep dive into the modeling of relationships (Unique and Specific NBRs). This includes considering Kinetic Binding variables (ties resulting from events and other actions) and Relationship State Transitions (capture of discrete event specifics versus the resulting status/end-state of the occurrence). 

 The goal of this workshop is to gain insights into what we are doing and why, consider the implications of our pattern decisions, be aware of our choices, and provide guidance moving forward with regard to changing the model in the future as the scope changes to include more specific details about discrete events.

Hans is president at Genesee Academy and a Principal at Top Of Minds AB. He is a data warehousing, business intelligence, and big data educator, author, speaker, and advisor, currently working on business intelligence and enterprise data warehousing (EDW) with a focus on Ensemble Modeling and Data Vault, primarily in Stockholm, Amsterdam, Denver, Sydney, and NYC.
He published the data modeling book “Modeling the Agile Data Warehouse with Data Vault,” available on Amazon in both print and Kindle editions.

Technologies


Most large enterprises struggle to adopt knowledge graphs and semantic models given years of investment in legacy mainframe, relational database, and data warehousing technologies. Because legacy data has been created and retained in source application schemas with zero semantic structure, the cost and complexity of converting to a universal semantic data model is prohibitive, often hindering knowledge graph projects. How do you unlock 20+ years and petabytes of data and activate it to leverage the benefits of semantics and knowledge graphs?

Join us in this working session where we will convert heterogeneous data sets into semantic golden records that adhere to an ontology. We’ll first walk through (1) advances in data formats, such as JSON Linked Data (JSON-LD), that make conversion of flat, relational data into serialized RDF viable at scale; and (2) advances in machine learning and AI in data classification to automate the linking of semantics to flat, structured data. We will work through all of the steps of data ingestion, classification, and remediation using Fluree Sense, an off-the-shelf AI pipeline trained to find, classify, and transform data into a given semantic ontology. 
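To make the JSON-LD idea concrete, the sketch below turns a flat relational row into linked data by attaching an `@context` that maps its columns to ontology terms. The ontology IRIs and field names are made up for illustration; this is a generic sketch of the format, not Fluree Sense itself.

```python
# Sketch: turning a flat relational row into JSON-LD by attaching an @context.
# The ontology IRIs below are invented for illustration.

import json

flat_row = {"customer_id": "C-1001", "name": "Acme Corp", "country": "US"}

jsonld_doc = {
    "@context": {
        "name": "http://example.org/ontology/legalName",
        "country": "http://example.org/ontology/countryCode",
    },
    "@id": f"http://example.org/customer/{flat_row['customer_id']}",
    "@type": "http://example.org/ontology/Customer",
    "name": flat_row["name"],
    "country": flat_row["country"],
}

print(json.dumps(jsonld_doc, indent=2))
```

The `@context` is what makes the record semantically unambiguous: the same document can be interpreted as RDF triples by any JSON-LD processor, which is why the format scales conversion of flat data.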

By the end of this session, we will have walked through a scenario using commercial tools available in the marketplace today to integrate legacy data from multiple traditional data stores onto a Knowledge Graph using real data, real ontologies, and real models.

This is a laptops-out session with a real demo environment — come ready to work and learn! 

Eliud Polanco is a seasoned data executive with extensive experience in leading global enterprise data transformation and management initiatives. Prior to his current role as President of Fluree, a data collaboration and transformation company, Eliud was Head of Analytics at Scotiabank, Global Head of Analytics and Big Data at HSBC, Head of Anti-Financial Crime Technology Architecture at Deutsche Bank U.S., and Head of Data Innovation at Citi. In his most recent role as Head of Analytics and Data Standards at Scotiabank, Eliud led a full-spectrum data transformation initiative to implement new tools and technology architecture strategies, both on premises and in the cloud, for ingesting, analyzing, cleansing, and creating consumption-ready data assets.

Large Language Models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard, Meta’s LLaMA, and others are having a significant impact on the nature of work. This course is designed for IT professionals interested in understanding the principles, applications, and potential of LLMs.

The course provides a comprehensive overview of LLM architecture, shedding light on how these models function. The focus then shifts to the nuanced art of crafting effective prompts, where participants will explore prompt engineering, guided by structured strategies and a proven framework. This section is aimed at enhancing your ability to integrate these tools into your workflow.

We will explore real-world applications of LLMs, emphasizing their adaptability across various scenarios. You will learn how to control ChatGPT and implement new methods to enhance LLMs with plugins, thereby increasing their capabilities to access current information and data.

We’ll also discuss the risks and ethical considerations associated with LLMs, ensuring an understanding of the responsible use of these technologies. The course concludes with a Q&A session, fostering a collaborative learning environment.

Tuned for IT professionals, this course serves as an informative guide to the future of IT through the lens of LLMs. Participants will gain practical insights and skills, equipping them to navigate the evolving technological landscape. Join us to deepen your understanding and engage with the transformative potential of Large Language Models.

Tom saw his first computer (actually a Teletype ASR-33 connected to one) in 1968, and it was love at first sight. He has nearly five decades of experience in the field of computer science, focusing on AI, VLDBs, and Business Intelligence. He co-founded and served as CEO of a profitable software consulting and managed services firm that grew to over 100 employees. Under his leadership, the company regularly won awards for culture and revenue growth. In 2015, Niccum led the successful sale of the company to database powerhouse Teradata, where he became a regional Senior Partner for consulting delivery and later transitioned to a field enablement role for Teradata’s emerging AI/ML solutions division. He is currently a Principal Consultant for Iseyon.

Niccum earned his PhD in Computer Science from the University of Minnesota in 2000, and he also holds an M.S. and B. CompSci from the same institution. His academic achievements were recognized with fellowships such as the High-Performance Computing Graduate Fellowship and the University of Minnesota Graduate School Fellowship.

In recent years, Niccum has continued his professional development with courses and certifications in areas like deep learning, signifying his commitment to staying abreast of cutting-edge technologies. His blend of academic accomplishment, entrepreneurial success, and industry expertise make him a leading figure in the integration of technology and business strategies.

Growth

As we all have an expiration date, with myriad deadlines along the way, managing time is one of the most important skills we can acquire in life. Effectively managing time is closely related to managing stress and one’s own self-care. Additionally, time management is essential for achieving one’s goals. In this session we’ll establish an inventory of goals, skills, and needs, and record these for analysis, prioritization, and setting a timeline for achievement. Please bring paper and pen (the paper can be a notebook, journal, or index cards) to sketch out your particular goals and timeline. While you may also do this on an electronic device, I will address research on the particular advantages of using actual paper and pen. We’ll also discuss avoiding traps such as procrastination and perfectionism.

You’ll learn:

  • The power of the pen
  • Boundary Bounty
  • Ways to Prioritize Self-Care
  • Avoiding Procrastination

And you’ll get a jump start on your own time management.

Shari Collins received her doctorate in philosophy in 1994 from Washington University in St. Louis. She was awarded the Dean’s Dissertation Fellowship in her last year. Collins received her bachelor’s in sociology, with minors in political science, criminal justice, and philosophy from Colorado State University in 1983. She then briefly attended Northwestern School of Law from 1983-4, and received her Secondary Education Teaching Certification in social studies from Portland State University in 1987.

Collins is an associate professor of philosophy in the School of Humanities, Arts and Cultural Studies at Arizona State University West. Collins is included in the International Biographical Centre’s 2000 Outstanding Academics of the 21st Century. She is the co-editor of four editions of “Applied Ethics: A Multicultural Approach,” a bestseller now in its fifth edition and the first text of its kind in the field of applied ethics, and editor of “Ethical Challenges to Business as Usual.” Collins is also a co-editor of “Being Ethical: Classic and New Voices on Contemporary Issues,” with Broadview Press. Collins has published on environmental rights, her original idea of environmental labels, environmental racism as it impacts American Indians, racial discrimination in the criminal justice system, environmental refugees, and the ethics of anonymous sperm banks.
Collins teaches courses on business ethics, environmental ethics, applied ethics, and the philosophy of sex and love. She developed the first environmental ethics course at ASU West, where she has served as interim director of ethnic studies and as chair of the Department of Integrative Studies. She was awarded the New College Outstanding Teaching Award for 2010-11 and the New College Outstanding Service Award for 2015-16. Collins is also an artisan who sells her cards and journals at Practical Art in downtown Phoenix.

In this presentation, we will explore an innovative approach to enrich Enterprise Knowledge Graphs (KGs) using insights captured from Business Intelligence (BI) tools and Bayesian Networks sourced from high-concurrency OLAP cubes. Our methodology aims to make these insights readily accessible across large enterprises without causing information overload. We’ll examine how Large Language Models (LLMs) can facilitate the construction and utility of these KGs, and demonstrate how this integration enables advanced analytics methodologies.

Objectives

  • Showcase how large-scale graphs can serve as an enterprise-wide knowledge repository.
  • Discuss the acceleration of graph updates through high-concurrency OLAP cubes.
  • Explore the synergy between KGs and LLMs for improved data analytics.

Eugene Asahara, with a rich history of over 40 years in software development, including 25 years focused on business intelligence, particularly SQL Server Analysis Services (SSAS), is currently working as a Principal Solutions Architect at Kyvos Insights. His exploration of knowledge graphs began in 2005 when he developed Soft-Coded Logic (SCL), a .NET Prolog interpreter designed to modernize Prolog for a data-distributed world. Later, in 2012, Eugene ventured into creating Map Rock, a project aimed at constructing knowledge graphs that merge human and machine intelligence across numerous SSAS cubes. While these initiatives didn’t gain extensive adoption at the time, the lessons learned have proven invaluable. With the emergence of Large Language Models (LLMs), building and maintaining knowledge graphs has become practically achievable, and Eugene is leveraging his past experience and insights from SCL and Map Rock to this end. He resides in Eagle, Idaho, with his wife, Laurie, a celebrated watercolorist known for her award-winning work in the state, and their two cats, Venus and Bodhi.

Semantics

Are data silos driving you mad? Do you wish it were easier to reuse, integrate, and evolve your data models, applications, and databases? Are you curious about ‘semantic technology’, ‘knowledge graphs’, or ‘ontologies’? If so, you have come to the right place. We introduce semantic technology and explain how it addresses several of the key drivers behind silos.

We give an informal introduction to ontology, which is a semantic model expressed in a logic-based representation that supports automated reasoning. In doing so we state what kinds of things an ontology must express. Then we introduce the ontology modeling language called OWL.  In the first session, you will learn about:

  • The benefits of semantic technology
  • RDF triples, triple stores and knowledge graphs
  • The key elements of OWL: individuals, classes, properties and literals
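
As a taste of the second bullet, here is a minimal, hand-rolled sketch (not OWL tooling) of what an RDF triple store does: it holds (subject, predicate, object) triples and answers pattern queries over them. All names below are made up for illustration.

```python
# RDF data reduces to a set of (subject, predicate, object) triples.
# These example triples and prefixes are hypothetical.
triples = {
    ("ex:Fido", "rdf:type", "ex:Dog"),
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
    ("ex:Fido", "ex:ownedBy", "ex:Alice"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Everything asserted about Fido:
print(match(s="ex:Fido"))
```

Real triple stores add indexing, named graphs, and SPARQL, but the underlying data model is exactly this simple.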

Michael Uschold has over thirty years’ experience in developing and transitioning semantic technology from academia to industry. He pioneered the field of ontology engineering, co-authoring the first paper and giving the first tutorial on the topic in 1995 in the UK. 

As a senior ontology consultant at Semantic Arts since 2010, Michael trains and guides clients to better understand and leverage semantic technology.  He has built commercial enterprise ontologies in finance, insurance, healthcare, commodities markets, consumer products, electrical device specifications, manufacturing, corporation registration, and data catalogs.  The ontologies are used to create knowledge graphs that drive production applications.  This experience provides the basis for his book:  Demystifying OWL for the Enterprise, published in 2018.

During 2008-2009, Uschold worked at Reinvent on a team that developed a semantic advertising platform that substantially increased revenue. As a research scientist at Boeing from 1997-2008 he defined, led, and participated in numerous projects applying semantic technology to enterprise challenges. He received his Ph.D. in AI from The University of Edinburgh in 1991 and an MSc. from Rutgers University in Computer Science in 1982.

This session builds upon the morning session. In this session you will learn about:

  • Fundamentals of logic, sets and inference
  • gist: An Upper Enterprise Ontology to kick start your ontology project
  • Integrating ontology and taxonomy
  • SPARQL and SHACL
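
To preview the inference fundamentals in the first bullet, here is a toy sketch of RDFS-style subclass reasoning, the kind of automated inference OWL reasoners perform at far greater scale and expressiveness. The triples are hypothetical.

```python
# Toy forward-chaining over rdfs:subClassOf: if x is a C and C is a
# subclass of D, then x is also a D. Example data is invented.
triples = {
    ("ex:Fido", "rdf:type", "ex:Dog"),
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
    ("ex:Animal", "rdfs:subClassOf", "ex:LivingThing"),
}

def infer_types(triples):
    """Repeatedly apply the subclass rule until no new facts appear."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (s, "rdf:type", d)
            for (s, p, c) in inferred if p == "rdf:type"
            for (c2, p2, d) in inferred
            if p2 == "rdfs:subClassOf" and c2 == c
        }
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

closure = infer_types(triples)
print(("ex:Fido", "rdf:type", "ex:LivingThing") in closure)  # True
```

Note that the fact that Fido is a living thing was never asserted; it was derived, which is exactly what makes a logic-based ontology more than a diagram.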

At the end of both sessions, participants will have had a comprehensive introduction to semantic technology in general and semantic modeling in particular.

Michael Uschold has over thirty years’ experience in developing and transitioning semantic technology from academia to industry. He pioneered the field of ontology engineering, co-authoring the first paper and giving the first tutorial on the topic in 1995 in the UK. 

As a senior ontology consultant at Semantic Arts since 2010, Michael trains and guides clients to better understand and leverage semantic technology.  He has built commercial enterprise ontologies in finance, insurance, healthcare, commodities markets, consumer products, electrical device specifications, manufacturing, corporation registration, and data catalogs.  The ontologies are used to create knowledge graphs that drive production applications.  This experience provides the basis for his book:  Demystifying OWL for the Enterprise, published in 2018.

During 2008-2009, Uschold worked at Reinvent on a team that developed a semantic advertising platform that substantially increased revenue. As a research scientist at Boeing from 1997-2008 he defined, led, and participated in numerous projects applying semantic technology to enterprise challenges. He received his Ph.D. in AI from The University of Edinburgh in 1991 and an MSc. from Rutgers University in Computer Science in 1982.

The Main Event

Skills

DataOps, GitOps, and Docker containers are changing the role of Data Modeling, now at the center of end-to-end metadata management.
Success in the world of self-service analytics, data meshes, microservices, and event-driven architectures can be challenged by the need to keep data catalogs and dictionaries interoperable with the constant evolution of schemas for databases and data exchanges.
In other words, the business side of human-readable metadata management must be kept up to date and in sync with the technical side of machine-readable schemas. This process can only work at scale if it is automated.
It is hard enough for IT departments to keep schemas in sync across the various technologies involved in data pipelines. For data to be useful, business users must also have an up-to-date view of the structures, complete with context and meaning.

In this session, we will review the options available to lay the foundations for a data management framework that provides architectural lineage and curated metadata management.

Pascal Desmarets is the founder and CEO of Hackolade, a data modeling tool for NoSQL databases, storage formats, REST APIs, and JSON in RDBMS. Hackolade pioneered Polyglot Data Modeling, which is data modeling for polyglot data persistence and data exchanges. With Hackolade’s Metadata-as-Code strategy, data models are co-located with application code in Git repositories as they evolve and are published to business-facing data catalogs to ensure a shared understanding of the meaning and context of your data. Pascal is also an advocate of Domain-Driven Data Modeling.

Somewhere in the shuffle, data modeling and data governance have become disconnected. Too often data governance professionals either skip over the data architecture and data modeling functions altogether, thinking they are someone else’s job, or haphazardly throw poorly documented requirements over a wall. Sometimes this is because of organizational structure, sometimes because of a lack of understanding. Make no mistake: a responsive and resilient data governance function still has data at its core, and that requires a value-focused approach to data modeling.

In this session we will cover:

  • The shift towards modern data governance
  • Closing the gap between governance and modeling
  • How data modelers can proactively connect into data governance

Laura Madsen is a global data strategist, keynote speaker, and author. She advises data leaders in healthcare, government, manufacturing, and tech. Laura has spoken at hundreds of conferences and events, inspiring organizations and individuals alike with her iconoclastic disrupter mentality. Laura is a co-founder and partner in Moxy Analytics, a Minneapolis-based consulting firm, where she converges two of her biggest passions: helping companies define and execute successful data strategies and radically challenging the status quo.

Most organizations feel an urgency to get value out of their data. They jump directly to implementing MDM initiatives, moving to the cloud, or implementing advanced analytics without a good Data Strategy. Few organizations give Data Modeling the place it deserves in the foundation represented by Data Management. Data Modeling must start with a Data Modeling Strategy as part of a holistic and integrated Data Strategy. This session presents HOW to produce Data Strategies, and in particular a Data Modeling Strategy, with the PAC (Pragmatic, Agile, and Communicable) Method.

Attendees will take away:

  • The relevance of a Data Management Maturity Model as an anchor for a Data Strategy
  • The power of a Canvas to communicate Data Strategy 
  • The Data Strategies Framework
  • The Data Modeling Strategy Canvas

Marilu Lopez (María Guadalupe López Flores, a Mexican U.S. citizen born in Los Angeles, California, but raised in Mexico City from age 4) dedicated over 30 years to corporate life in the financial sector before becoming a Data Management consultant and trainer. She pioneered the Enterprise Architecture practice in Mexico, which led her to focus on Data Architecture and, from there, expand her practice to Data Management, specializing in Data Governance, Metadata Management, and Data Quality Management. For decades she suffered the lack of a holistic and comprehensive Data Strategy. Her passion for Data Management has moved her to dedicate her volunteer work to DAMA International in different roles, from president of the DAMA Mexico Chapter to VP of Chapter Services.

Without relationships, there would be no useful databases. However, we tend to focus on the entities and devote much less attention and rigor to the relationships in our data models. While you already are familiar with “type of” and “part of”, we will also explore “transformation of”, “instantiated as”, “located in”, and other universal relationships that occur in data models across many industries. You will see how to create domain-specific relationships derived from these universal relationships. Using universal relationships will make your data models more consistent, help to recognize patterns, and help your model audience understand both your models and the world better. After this presentation, you will think about relationships differently and treat them as first-class citizens in your data models!

Learning objectives:

  • Apply universal relationships to your data models
  • Help your subject matter experts see patterns by using universal relationships
  • Improve your data integration projects by mapping both the source and target data models to one with universal relationships

Norman Daoust founded his consulting company Daoust Associates in 2001. He became addicted to modeling as a result of his numerous healthcare data integration projects. He was a long-time contributor to the healthcare industry standard data model Health Level Seven Reference Information Model (RIM). He sees patterns in both data model entities and their relationships. Norman enjoys training and making complex ideas easy to understand.

Subject areas for data model patterns are these: People and Organizations, Geography, Assets, Activities, and Time. Other topics, however, cross these areas and thus fall into the category of “metadata”. The first is accounting, which is itself a modeling technique that dates back 400 years and addresses an entire enterprise.

The second metadata area is information resources. This area encompasses books, e-mail communications, photographs, videos, and the like, as well as works of both visual and musical art. In each case, the artifact’s existence must be documented and catalogued. Note that in each case, the artifact and its category are also about something (anything) else in the world. Libraries and museums have, for the last two hundred years or so, been developing techniques for cataloguing the knowledge encompassed in these artifacts, even as, in the past 50 years, the technology for supporting this has completely changed.

This presentation brings together the author’s work on data model “patterns” (specifically, information resources), with the attempts by the International Federation of Library Associations and Institutions to use modern technology to support these cataloguing efforts.

In the Information Industry since it was called “data processing”, David Hay has been producing data models to support strategic and requirements planning for more than forty years. As Founder and President of Essential Strategies International for nearly thirty of those years, Mr. Hay has worked in a variety of industries and government agencies. These include banking, clinical pharmaceutical research, intelligence, highways, and all aspects of oil production and processing. Projects entailed defining corporate information architecture, identifying requirements, and planning strategies for the implementation of new systems. Mr. Hay’s most recent book,  “Achieving Buzzword Compliance: Data Architecture Language and Vocabulary” applies the concepts of consistent vocabulary to the data architecture field itself.

Previously, Mr. Hay wrote “Enterprise Model Patterns: Describing the World”, an “upper ontology” consisting of a comprehensive model of any enterprise, viewed at several levels of abstraction. This is the successor to his ground-breaking 1995 book, “Data Model Patterns: Conventions of Thought”, the original book describing standard data model configurations for standard business situations. In addition, Mr. Hay has written other books on metadata, requirements analysis, and UML. He has spoken at numerous international and local data architecture, semantics, user group, and other conferences.

Enterprises want to move away from canned reports, or even guided analytics where a user simply filters a value or changes the sorting, to self-service analytics in order to make more timely and informed data-driven decisions. However, the most graphically stunning and easy-to-use UIs can be for naught if the data isn’t right or isn’t structured in a manner that is easy to use and prevents double counting. To enable “trusted” self-service analytics by business users, ensuring that the data aligns with the business nomenclature, rules, and relationships is paramount, in addition to solid balancing and auditing controls.

The key to ensuring alignment of the data to the business is Conceptual Data Modeling – modeling business entities and their relationships to understand granularity, integration points, and to reconcile semantics. When enabling self-service analytics at the enterprise level developing the Conceptual Data Model becomes even more important as it aids in alignment with data governance to ensure an enterprise view.

You will:

  • Understand why Conceptual Data Modeling is necessary to enable self-service analytics
  • Gain an understanding of what a Conceptual Data Model is and isn’t
  • Learn techniques for eliciting the business information needed to create the Conceptual Data Model
  • Learn techniques to create Conceptual Data Models

Pete Stiglich is the founder of Data Principles, LLC, a consultancy focused on data architecture/modeling, data management, and analytics. Pete has over 25 years of experience in these areas as a consultant managing teams of architects and developers to deliver outstanding results, and is an industry thought leader on Conceptual/Business Data Modeling. His motto is “Model the business before modeling the solution”.

Pete is also VP of Programs for DAMA Phoenix and holds the CDMP and CBIP certifications at the mastery level.

Unlock the full potential of your meticulously crafted data models by seamlessly integrating them into the broader data ecosystem. In this session, we explore how to transition from investing time, money, and effort into building high-quality data models to effectively leveraging them within a dynamic Data Marketplace. Discover the transformative journey that organizations must undertake to extend the reach of their data models, catering not only to technical audiences but also to a broader spectrum of users. Join us as we navigate the steps involved in making your data models accessible for Data Shopping, unraveling Data Lineage, and curating a comprehensive Data Inventory. Our presentation will demystify the process, breaking down complex concepts into easily understandable components for both technical and non-technical stakeholders. The focal point of our discussion revolves around the pivotal role your data models play in enhancing Data Literacy strategies. Learn how to bridge the gap between technical intricacies and broader audience comprehension, ensuring that your data models become a powerful asset in fostering organization-wide understanding and utilization of valuable information.

Key Takeaway: Elevate your Data Literacy strategy by strategically integrating and showcasing your data models, transforming them from technical artifacts into catalysts for organization-wide data exploitation.

Kevin Weissenberger is a Senior Technology Consultant at Sandhill Consultants. He has over 30 years’ experience in IT, from telecommunications to systems analysis to technical support, consulting, and training. He is a trainer and consultant in all aspects of Data Modeling and Data Architecture, with particular emphasis on the use of erwin Data Modeler.

Prepare yourself and your mind for the day so you can make the most of it!

This presentation will cover the what, why and how of meditation and how this applies to data modeling.
Come invigorate yourself, reduce stress, develop your mind, and learn about and practice meditation.
Len Silverston, who is not only a data management, data governance, and data modeling thought leader but also a fully ordained Zen priest and spiritual teacher, will provide a brief overview of what meditation is, why it is important, and how to meditate, and will lead a sitting meditation and a moving meditation (Qigong) session.

Some ask, ‘What does Zen have to do with data modeling?’ The answer is ‘everything’. Find out why.

This will be an enlightening, wonderful session to start your day in a relaxed and receptive state of mind!

Len Silverston is a best-selling author, consultant, and speaker with over 35 years of experience in helping organizations around the world integrate data, systems, and people.

He is an internationally recognized thought leader in the fields of data management as well as in the human dynamics that lie at the core of synthesizing and effectively using information. He is the author of the best-selling ‘The Data Model Resource Book’ series (Volumes 1, 2, and 3), which provides hundreds of reusable data models and has been translated into multiple languages. Mr. Silverston’s company, Universal Mindful, LLC (www.universalmindful.com), focuses on the cultural, political, and human side of data management.

He is also a fully ordained Zen priest and life coach. He provides training, coaching, corporate mindfulness workshops, and retreats through his organization, ‘Zen With Len’ (www.zenwithlen.com).

Technologies

Performance is often a key requirement when modeling for MongoDB or other NoSQL databases. Achieving performance goes beyond having a fast database engine; it requires powerful transformations in your data model. I developed schema design patterns for MongoDB a few years ago to achieve better performance, and the publication of the book “MongoDB Data Modeling and Schema Design” with Steve Hoberman and Pascal Desmarets led to additional work on these patterns. This presentation discusses the current state of the schema design patterns, the evolution of some, and the introduction of new ones.

You will:

  • Recall the differences between SQL and NoSQL that make schema design patterns necessary for many projects.
  • List the most important schema design patterns for achieving good performance with MongoDB.
  • Describe the steps to migrate an application using the schema versioning pattern.
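The schema versioning pattern mentioned above can be sketched in a few lines. This is an illustrative example with invented field names, using plain dictionaries rather than an actual MongoDB driver: each document carries a schema_version field, and the application upgrades old documents lazily as it reads them.

```python
def upgrade(doc: dict) -> dict:
    """Migrate a document to the latest schema version on read."""
    version = doc.get("schema_version", 1)
    if version == 1:
        # v1 stored a single phone string; v2 stores a list of contact methods
        doc["contacts"] = [{"type": "phone", "value": doc.pop("phone", None)}]
        doc["schema_version"] = 2
    return doc

old_doc = {"_id": 1, "name": "Ada", "phone": "555-0100", "schema_version": 1}
new_doc = upgrade(old_doc)
print(new_doc["schema_version"])  # 2
```

Because documents are upgraded as they are read, the collection can be migrated gradually, without taking the application offline.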

Daniel Coupal is a Staff Engineer at MongoDB. He built the Data Modeling class for MongoDB University. He also defined a methodology to develop for MongoDB and created a series of Schema Design Patterns to optimize Data Modeling for MongoDB and other NoSQL databases.

In this presentation, we will explore the benefits and pitfalls of NoSQL data modeling at American Express. We will dive into how to work efficiently through the modeling process with application teams, and the strategies used for a successful implementation. We will also highlight the challenges the team faced, as well as the flexible schema and query capabilities enabled by a successful implementation of the solution across the platform. Finally, we will provide some best practices for designing and implementing NoSQL data models, including tips for schema design and performance optimization. Our goal is to give the audience a solid understanding of how NoSQL modeling can be used to create seamless experiences across multiple database platforms, and how to design a NoSQL schema that delivers exceptional performance for your business.

Benefits and Key Takeaways:

  • How to think from a NoSQL modeling perspective
  • Enforce reusability
  • Optimize solutions for use cases
  • Reduce time to market

Fully Communication Oriented Information Modeling (FCO-IM) is a groundbreaking approach that empowers organizations to communicate with unparalleled precision and elevate their data modeling efforts. FCO-IM leverages natural language to facilitate clear, efficient, and accurate communication between stakeholders, ensuring a seamless data modeling process. With the ability to generate artifacts such as JSON, SQL, and Data Vault, FCO-IM enables data professionals to create robust and integrated data solutions, aligning perfectly with the project’s requirements.

You will learn:

  • The fundamentals of FCO-IM and its role in enhancing communication within data modeling processes.
  • How natural language modeling revolutionizes data-related discussions, fostering collaboration and understanding.
  • Practical techniques to generate JSON, SQL, and Data Vault artifacts from FCO-IM models, streamlining data integration and analysis.
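To make the idea of artifact generation concrete, here is a heavily simplified, hypothetical sketch (not actual FCO-IM or CaseTalk output) of deriving a relational structure from a verbalized fact type:

```python
import re

# A fact type verbalized in natural language; the <...> placeholders mark roles.
fact_type = "Student <number> is enrolled in Course <code>"

def derive_table(fact: str) -> str:
    """Derive a simple CREATE TABLE statement from a verbalized fact type.
    The table name is hardcoded here purely for illustration."""
    roles = re.findall(r"(\w+) <(\w+)>", fact)
    cols = ", ".join(f"{entity.lower()}_{attr}" for entity, attr in roles)
    return f"CREATE TABLE enrollment ({cols});"

print(derive_table(fact_type))  # CREATE TABLE enrollment (student_number, course_code);
```

The point of the sketch is the direction of travel: the natural-language fact is the source of truth, and the technical artifact is generated from it.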

Get ready to be inspired by Marco Wobben, a seasoned software developer with over three decades of experience! Marco’s journey in software development began in the late 80s, and since then, he has crafted an impressive array of applications, ranging from bridge automation, cash flow and decision support tools, to web solutions and everything in between.

As the director of BCP Software, Marco’s expertise shines through in developing off-the-shelf end products, automating data warehouses, and creating user-friendly applications. But that’s not all! Since 2001, he has been the driving force behind CaseTalk, the go-to CASE tool for fact-oriented information modeling.

Join us as we delve into the fascinating world of data and information modeling alongside Marco Wobben. Discover how his passion and innovation have led to the support of Fully Communication Oriented Information Modeling (FCO-IM), a game-changing approach used in institutions worldwide. Prepare to be captivated by his insights and experience as we explore the future of data modeling together!

AI is becoming ubiquitous across every industry, product, and service we encounter both at work and in our private lives. The lifeblood of all these AI models is the data that is used to train them. In this presentation we will discuss the role of data modelers in AI training and the potential for new AI tools to augment the work of human data modelers.  

You will learn: 

  • How good data models can reduce risk and speed up development of AI models 
  • What are the emerging legal and regulatory requirements for AI Governance and what new systems and attributes will be needed to support them 
  • How AI can be used to augment human data modelers and boost efficiency 

Kimberly Sever is an independent consultant with a 30-year career in the financial technology industry. She has worked on numerous data modeling projects including design of Bloomberg’s B-Pipe data feed, development of the BSID and FIGI security identification systems, and most recently, identification of critical metadata attributes for data and AI governance. She is currently engaged in designing a risk scoring framework for AI models and writing policies to guide development of safe and transparent AI systems.

With Oracle 23c’s new duality views, documents are not stored as such but materialized on demand.  Duality views give your data both a conceptual and an operational duality: it’s organized both relationally and hierarchically.  You can base different duality views on data stored in one or more relational tables, providing different JSON hierarchies over the same, shared data.  This means that applications can access (create, query, modify) the same data as a set of JSON documents or as a set of related tables and columns, and both approaches can be employed at the same time.

In this session you will learn how to:

  1. Not be forced into making compromises between normalization and NoSQL
  2. Eliminate data duplication and the risk of inconsistencies when working with JSON documents
  3. Design the optimal schema for duality views and avoid object-relational impedance mismatch for your developers
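The core idea, one set of normalized rows presented on demand as a JSON hierarchy, can be illustrated conceptually in a few lines. This is plain Python, not Oracle syntax, and the table and column names are invented:

```python
# Shared relational data: two normalized "tables" as lists of rows.
orders = [{"order_id": 10, "customer_id": 1, "total": 25.0}]
customers = [{"customer_id": 1, "name": "Ada"}]

def customer_document(customer_id: int) -> dict:
    """Materialize a JSON-style document over the shared relational rows,
    the way a duality view presents a hierarchy over tables."""
    customer = next(c for c in customers if c["customer_id"] == customer_id)
    return {
        "customer_id": customer["customer_id"],
        "name": customer["name"],
        "orders": [o for o in orders if o["customer_id"] == customer_id],
    }

doc = customer_document(1)
print(doc["orders"][0]["total"])  # 25.0
```

Because the document is assembled from the rows rather than stored, there is a single copy of the data and no duplication to keep consistent.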

Pascal Desmarets is the founder and CEO of Hackolade, a data modeling tool for NoSQL databases, storage formats, REST APIs, and JSON in RDBMS. Hackolade pioneered Polyglot Data Modeling, which is data modeling for polyglot data persistence and data exchanges. With Hackolade’s Metadata-as-Code strategy, data models are co-located with application code in Git repositories as they evolve and are published to business-facing data catalogs to ensure a shared understanding of the meaning and context of your data. Pascal is also an advocate of Domain-Driven Data Modeling.

Beda Hammerschmidt studied computer science and later earned a PhD on indexing in XML databases. He joined Oracle as a software developer in 2006. He initiated the support for JSON in Oracle and is co-author of the SQL/JSON standard. Beda currently manages the groups supporting semi-structured data in Oracle (JSON, XML, full text, etc.).

Have you ever noticed how significant the leap of detail is from a Conceptual Data Model to a Logical Data Model?  Have you ever wondered if there is a middle ground where you are able to illustrate the highest and most critical level of data elements without unrolling a spider-web of minutia?  Well, I believe there is!

Taken from my vast experience of data modeling and my recent exposure to collecting metadata for Critical Data Elements (CDEs), I will be introducing a new CDE Data Model that can be utilized to align business and technical professionals at a high enough level for everyone’s understanding, but at enough of a detailed level that everyone is aligned on the valuable information that is stored without getting out their magnifying glass!

Specifically, we will cover:

  • Reviewing the advantages and disadvantages of different levels of data models
  • Introducing the CDE Data Model
  • Explaining why the CDE Data Model provides the most advantages, while lessening the disadvantages of other data model approaches

Bruce Weidenhamer has an extensive background, including decades of multi-level data modeling experience utilizing various methods and tools.  He has also worked as a database administrator (DBA) and most recently as a data steward within a data governance and management team at American Express.  Bruce has spent the last 30+ years in various business units at American Express and has a deep love of presenting his passion for data architecture, data modeling, and database design.  Bruce holds a bachelor’s degree from DeVry Institute, specializing in Computer Information Systems.  Outside of work, Bruce has designed several hiking/biking trails, is a Nebraska native, and can be found helping his wife spoil their three dogs in their home or cabin in the Phoenix, Arizona area.

In the rapidly evolving landscape of data management, the synergy between agility and security has become paramount. Join us for an insightful presentation where we delve into the convergence of cutting-edge technology and strategic solutions that empower enterprises to achieve both data agility and ironclad security.

In this exclusive session, we will shine a spotlight on two dynamic pillars driving this transformation:

  1. Datavault Builder’s Business Model-Driven Data Warehouse Automation Solution: Embrace the future of data warehousing with Datavault Builder’s revolutionary approach. Discover how their innovative automation solution leverages business models to streamline the end-to-end process, from raw data ingestion to analytics-ready insights. Explore how Datavault Builder’s unique methodology enhances efficiency, reduces development cycles, and empowers organizations to harness the true potential of their data assets.


  2. infoSecur: Elevating Data Access and Business Rule Management: The backbone of any data-driven enterprise is a robust security framework. Enter infoSecur, a cutting-edge solution that seamlessly integrates with Datavault Builder’s ecosystem. Delve into how infoSecur fortifies data access with advanced authentication and authorization controls, ensuring that only authorized personnel interact with sensitive information. Uncover how infoSecur enhances compliance by providing a comprehensive suite of tools for managing and enforcing business rules across the data lifecycle. Eliminate report sprawl while enabling your organization to share data more broadly than ever before, all while still protecting it fiercely.


Key Takeaways:

  • Synergizing Agility and Security: Understand how Datavault Builder’s automation and infoSecur’s security integration create a harmonious balance between agility and data protection.
  • Efficiency Redefined: Learn how business model-driven automation accelerates data warehousing processes and drives operational efficiency.

Petr Beles has profound experience in data warehouse projects in the telecommunications, financial and industrial sectors. For several years he has been successfully implementing projects with Data Vault as a Senior Consultant. After recognizing the need for automation with Data Vault, he and his colleagues started to develop an automation tool, culminating in the creation of the Datavault Builder. Petr Beles is today CEO of 2150 Datavault Builder AG.

Michael Magalsky is founder and Principal Architect at infoVia (http://info-via.com).  He draws from a quarter-century of software, data, and security experience at global leaders in the healthcare, manufacturing, insurance, education, and services industries.  Now, as a thought leader and senior consultant with infoVia specializing in Information Architecture, Data Warehouse Modeling and Implementation, and Data Security, his team helps other organizations benefit from the lessons learned in the data journeys of dozens of successful clients in the United States and abroad.  Mike’s passion for results-oriented, value-added data integration and governance has led to speaking engagements in numerous national forums.  He lives with his family in beautiful Boise, Idaho, where he enjoys spending his weekends in the Idaho outdoors.

Case Studies


Welcome to “Whose Data Is It Anyway?”, the show where everything is made up and the points don’t matter. That’s right, the points are like the number of times you don’t get a straight answer to “Who is the owner of this data…?!?”

As the Masters of Metadata, we know that ownership and accountability of data is foundational to Data Management. But in my experience of asking this question, I have received responses that vary from “well, it’s the company’s data- so nobody really owns it” to “this is my data- so nobody else can touch it” (also known as data “mine”-ing…)! In the end, we need to come up with a good, better, or best solution to formalizing data ownership. So come join me as we explore topics such as:

 

  1. What does data ownership actually mean?
  2. Different solutions to formalizing accountability
  3. The importance of decision rights

Deron Hook is a certified data management professional who is passionate about people, data, and formulating strategies that utilize both to their highest potential. He is currently the Director of Data Governance and Management at American Express and has experience creating foundational Data Governance programs at various companies in the financial services industry. In his current role, he is building out the Data Governance team in the Global Merchant and Network Services business unit at American Express using foundational initiatives such as metadata management, data stewardship, and data quality management.

Deron is a recognized data leader, has served on various boards, and was a Chief Data Officer (CDO) summer school graduate by Carruthers & Jackson. He received his MBA with a concentration in Data Analytics from Purdue University and is also a proud Brigham Young University graduate. He and his wife have five children and live in Phoenix, Arizona.

A solid design is the blueprint of a well-crafted database.  It is crucial for the success of any business process and brings more clarity for all stakeholders involved.  Well-defined business data elements add to that clarity.  To elevate a database design to the next level, it’s essential to consider how the naming of data elements enables quick understanding even before reading the detailed definition.  This aids in absorbing the finer points of the big picture, which in turn can have an impact on the design for efficient access, smooth processing, and easier analysis. Self-documenting element names that succinctly represent the intended purpose are further enhanced with the use of well-known “classwords” at the end of data element names.  Leveraging classwords adds a touch of elegance and sophistication (or “class”) to your data models, assuring your business partners that the blueprint aligns with current requirements and provides a firm foundation for those of the future.

In this session, we will explore the significant role classwords have in identifying and describing the general use of data elements, before incorporating prime words and modifier words. Through diverse examples, we will analyze different scenarios to help you in the application of the most suitable classwords for various situations.
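As a small, hypothetical illustration (the approved classword list below is invented; real naming standards vary by organization), a check like this can flag element names that are missing a classword:

```python
# Invented classword list for the sketch; real standards are organization-specific.
CLASSWORDS = {"AMOUNT", "CODE", "DATE", "NAME", "NUMBER", "INDICATOR", "TEXT"}

def has_valid_classword(element_name: str) -> bool:
    """Return True if the last word of the element name is a known classword."""
    return element_name.upper().split("_")[-1] in CLASSWORDS

print(has_valid_classword("CUSTOMER_BIRTH_DATE"))  # True
print(has_valid_classword("CUSTOMER_INFO"))        # False
```

The classword at the end of the name tells a reader the general use of the element (a date, an amount, an indicator) before they ever open the definition.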

Steve Sewell graduated from Illinois State University in Business Data Processing, where he gained expertise in various programming languages, requirements gathering, and normalized database design. With a career spanning over three decades in the insurance industry, Steve has excelled in many roles including his most recent as a Senior Data Designer at State Farm.  His current work involves providing strategic guidance for enterprise-wide initiatives involving large-scale Postgres and AWS implementations, while adhering to best practices in database design. Steve is actively involved in imparting new Data Designers with the knowledge of data modeling best practices.

Logical Data Modelers are the bridge to helping the business understand the value of their data. But what happens when a business can’t find a Logical Modeler to delve into, document, and clearly communicate the business’s needs? Can experienced Data Modelers customize and prioritize what knowledge is most important and train a proficient Logical Modeler in less than a month? YES!

Learn how three data modelers with very different data modeling backgrounds collaborated to build a flexible plan and trained a proficient Logical Data Modeler in under three weeks. Not once, but twice! Our success was a repeatable, customizable process that met our client’s near-term business objectives and helped achieve our long-term goals.

You Will Learn:

  • Creative problem solving is a natural skill for Data Modelers
  • How to respond when the universe throws a big wrench into the works
  • Individual training content designed for each person based on their background

Laurel Sturges, a seasoned data professional, has been an integral part of the tech community helping businesses better understand and utilize data for over 35 years. She refers to problem solving as an adventure where she really finds passion in the process of discussing and defining data, getting into all the details including metadata, definitions, business rules and everything that goes along with it.

Laurel is an expert in creating and delivering quality business data models and increasing communication between key business stakeholders and IT groups. She provides guidance for clients to make informed decisions so her partners can build a quality foundation for success.

She has a diverse background serving in a multitude of roles educating individuals as a peer and as an external advisor. She has served in many industries like manufacturing, aviation, and healthcare. Her specialization is relational data theory and usage of multiple modeling tools.

Outside of the data world, Laurel is learning to garden and loves to can jams, fresh fruits, and veggies. Laurel is an active supporter of Special Olympics of Washington. She has led her company’s Polar Plunge for Special Olympics team for 9 years, joyfully running into Puget Sound in February!

Université du Québec à Montréal (UQAM) is a large university serving about 35,000 students every semester. Five years ago, the university decided to build in-house a new Academic Management System (AMS). I was hired as the lead data modeller for this project, which is still ongoing. The first module delivered manages programs and activities.

In this talk, I will present how the project team tackled the data modelling, and the importance of a business glossary and data quality rules early in the development cycle. We will review some design decisions made to address specific challenges, how to provide flexibility in a relational model, the reality of retrofits when using an Agile methodology, and what happens when the data model exposes flaws in current business processes.

Michel is an IT professional with more than 40 years of experience, mostly in business software development. He has been involved in multiple data modelling and interoperability activities. Over the last 7 years, as an independent consultant, he has worked on data quality initiatives, metadata management, data modelling, and data governance implementation projects. He has a master’s degree in software engineering, has been a member of DAMA International since 2017, and has been certified CDMP (Master) since 2019. He currently consults on a large application development project as lead for data management and modelling, teaches introductory data governance, data quality, and data security at Université de Sherbrooke, and translates books on data governance and management. He is currently VP Professional Development for DAMA International.

FastChangeCo is a fictitious company founded in the early 20th century. Struggling with today’s rapidly changing business requirements and dealing with more and more changes in source systems, FastChangeCo repeatedly tried to extend its data warehouse with state-of-the-art technologies to handle all these changes. But at the end of the day, it could not keep up with its ambitious goals. To meet these requirements, FastChangeCo created a high-level vision, a goal against which it will double-check all decisions in the upcoming project: to be more flexible, more agile, faster, and less complex in all upcoming decisions, tasks, and implementations. This talk illustrates why and how FastChangeCo’s Center of Excellence is now using Data Vault’s data modeling methodology to get closer to its vision.

Learning objectives:

  • Understand Data Vault fundamentals
  • Enable Data Vault to be used in an agile context.
  • Think of Data Vault as part of a data solution.
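As a minimal sketch of the fundamentals, assuming a simplified column set (real Data Vault implementations carry more metadata), a hub row pairs a business key with a deterministic hash key, a load date, and a record source:

```python
import hashlib
from datetime import date

def hash_key(business_key: str) -> str:
    """Deterministic hash key computed from the normalized business key."""
    return hashlib.md5(business_key.upper().encode()).hexdigest()

def hub_row(business_key: str, source: str) -> dict:
    """Build a simplified hub row; column names are illustrative assumptions."""
    return {
        "hub_customer_hk": hash_key(business_key),
        "customer_bk": business_key,
        "load_date": date.today().isoformat(),
        "record_source": source,
    }

row = hub_row("CUST-42", "crm")
print(row["customer_bk"])  # CUST-42
```

Because the hash key is derived from the business key alone, loads from different source systems can run in parallel and still land on the same hub row, which is part of what makes Data Vault attractive in an agile context.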

Dirk Lerner is an experienced independent consultant and managing director of TEDAMOH. With more than two decades of experience in BI projects, he is considered a global expert on BI architectures, data modeling and temporal data. Dirk advocates flexible, lean, and easily extendable data warehouse architectures.

Through the TEDAMOH Academy, Dirk coaches and trains in the areas of temporal data, data modeling certification, data modeling in general, and on Data Vault in particular.
As a pioneer for Data Vault and FCO-IM in Germany he wrote various publications, is a highly acclaimed international speaker at conferences and author of the blog https://tedamoh.com/blog.

Robert kind of always knew that something was different about him.  He was good, very good with numbers and logic.  Not so good with people.  He was never quite sure why.  Until…

This presentation will look at some of Robert’s weaknesses and strengths that both hindered and helped him as he dealt with the data around him and the world around him.

It will also look at some of the skills and abilities that separated him from the pack, in both positive and negative ways.

Raymond has been a data management professional for over 25 years. He has worked in the development and design of databases for a national government organization. He has designed and developed databases for personnel, financial, corporate travel, and business workflow systems. He has also been active in process improvement initiatives. Raymond has been a reviewer of several data management books, including the second edition of the Data Management Body of Knowledge (DMBOK).

VDAB is the public employment service of Flanders. If you are entitled to live and work in Belgium, the VDAB agency supports you in seeking a job, finding training possibilities, or finding information about unemployment benefits. Together with the Flemish Data Utility Company, VDAB is working on a secure data sharing platform where citizens can choose which data they will share, with which organization, and for what period: a “personal data vault.” The idea behind personal data vaults originates from Sir Tim Berners-Lee (the inventor of the World Wide Web), who launched this technology under the name Solid. The Solid platform combines software and operational processes to prevent, detect, and respond to security breaches. The platform allows citizens to share data safely and easily from their Personal Data Vaults with various service providers and applications. Personal Data Vaults come with challenges in complying with global rules:

  • Establish a trusted data ecosystem.
  • Handle the most sensitive citizen data without leaking confidential personal information.
  • Operate in line with GDPR.
  • Ensure security and privacy by design, including encryption of data and supporting processes to protect personal data.
  • Comply with the Solid Specifications, on EU-based cloud infrastructure that is PCI, SOC 2, and ISO 27001 certified.

The result is Personal Data Vaults with secure data sharing, where citizens can choose which data they will share, with which organization, and for what period.

What you will learn:

  • OSLO (Open Standards for Linked Organisations), a process and methodology for the development and recognition of a data standard
  • SOLID, store data securely in decentralized data stores (Pods)
  • Organize data around individuals and decouple from applications.
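Conceptually, a personal data vault ties access grants to the citizen rather than to applications. The sketch below is an invented, plain-Python illustration of that idea; Solid's actual mechanisms (Pods, WebID, access control lists) work differently:

```python
from datetime import date

# A citizen's data vault and the grants they have chosen to issue.
# All names and the grant structure are invented for illustration.
pod = {"name": "Jan", "diploma": "MSc", "salary": 50000}
grants = [{"org": "VDAB", "fields": {"name", "diploma"}, "expires": date(2026, 1, 1)}]

def read(org: str, field: str, today: date) -> object:
    """Return the field value only if a non-expired grant covers it."""
    for g in grants:
        if g["org"] == org and field in g["fields"] and today <= g["expires"]:
            return pod[field]
    raise PermissionError(f"{org} has no access to {field}")

print(read("VDAB", "diploma", date(2025, 6, 1)))  # MSc
```

The citizen controls which organization sees which fields and until when; anything outside a grant is simply unreadable.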

Ivan Schotsmans is a senior Enterprise Data Architect at VDAB. He has 30+ years of experience in data management on large, multi-cultural programs, in roles ranging from developer to project manager to subject matter expert. Over the past decades he has worked for companies like Nike, AB InBev, Deloitte, and Bayer. In recent years he has been active in government organizations and BEL-20 companies.

During his career he played an active role in many global organizations like IAIDQ, TDWI or IIBA. Locally he is the face of BI-Community.org focusing on the Belgian data management market.

More recently, alongside his daytime job, he provides advice to local start-ups in the data management space and supports vendors to enter the European market.

Ivan is also an esteemed speaker at local and international seminars on a range of data management topics, such as information quality, data modelling, and data governance.

Semantics


Tired of building logical models that aren’t used? See how you can use JSON logical models, created with Hackolade, to code-generate semantic views that translate production data from systems into business language for use by data analysts and data scientists. Adding custom properties to Hackolade allows you to map between raw source system data copies in your EDW and desired business terms. Instead of huge, flat tables like “ACCOUNT” from Sales Cloud, expose “CUSTOMER” and even introduce logic not available in source data.

Objectives:

  1. Enable auto-generated semantic views in the data warehouse to expose business terms.
  2. Enable loose coupling between production systems and semantic views, absorbing at least some of the impact of production system changes.
  3. Compare pass-through views to generating semantic views from logical JSON-based models.
  4. Enable standards across logical models so that, for instance, the “ID” in the semantic view represents the logical key, regardless of the name in the underlying system.
  5. Compare JSON attribute name grouping to flat naming or hand-coded views.
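A simplified sketch of the code-generation step (the model layout and mapping property names below are assumptions, not Hackolade's actual format): a JSON-based logical model drives a generated semantic view that renames raw source columns into business terms.

```python
# Hypothetical JSON logical model with custom properties mapping raw
# source columns to business-facing names. Structure is an assumption.
model = {
    "entity": "CUSTOMER",
    "source_table": "SALES_CLOUD.ACCOUNT",
    "attributes": [
        {"business_name": "ID", "source_column": "ACCOUNT_ID"},
        {"business_name": "CUSTOMER_NAME", "source_column": "ACCT_NM"},
    ],
}

def semantic_view_sql(model: dict) -> str:
    """Generate a CREATE VIEW statement exposing business terms."""
    cols = ",\n  ".join(
        f"{a['source_column']} AS {a['business_name']}" for a in model["attributes"]
    )
    return f"CREATE VIEW {model['entity']} AS\nSELECT\n  {cols}\nFROM {model['source_table']};"

print(semantic_view_sql(model))
```

Because the view is generated rather than hand-coded, a source-system rename only requires updating the model and regenerating, which is the loose coupling the objectives describe.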

Rob Garrison is an experienced data modeler and architect. He currently works at Compassion International where he has the privilege of not managing anyone but spending his time on technical work. His previous roles were at Nike, DISH, WebMD, and Fiserv. Rob has presented at DMZ, PASS, and many local conferences and user groups. He also was the technical editor for Steve Hoberman’s book “Data Modeling for MongoDB: Building Well-Designed and Supportable MongoDB Databases” and has had articles published in Simple-Talk, Teradata Developer Exchange, and Database Journal.

The primary goal of this presentation is to demonstrate how data quality checks, predictive modeling techniques, and a knowledge graph were used to develop an effective credit card fraud detection framework. This comprehensive approach offers enhanced accuracy, robustness, and explainability in detecting fraudulent transactions.

The key stages of the framework are:

  1. Data cleansing to ensure the training dataset was free from errors, inconsistencies, and missing values. Eliminating these issues produced a high-quality training dataset ready to be fed into the predictive model.
  2. The predictive model comprises a sophisticated ensemble approach combined with multi-fold cross-validation techniques. By leveraging multiple models and evaluating their performance across different cross-validation folds, the accuracy and reliability of the fraud detection model were significantly enhanced. This approach made it possible to identify patterns and anomalies associated with fraudulent transactions.
  3. A knowledge graph was used to promote explainability and empower users to validate instances of fraud detection. Using the knowledge graph makes it possible to uncover hidden patterns, identify root causes, and easily validate the legitimacy of flagged transactions.

Learning objectives:

  1. Implement data cleansing techniques to pre-process datasets for predictive modeling purposes effectively.
  2. Employ critical thinking and evaluation skills to identify and select the optimal predictive model and feature set that yields the highest F1 score, indicating robust performance and accurate predictions.
  3. Design and implement a user-friendly interface utilizing a knowledge graph to facilitate efficient data exploration and navigation, enabling users to navigate the dataset seamlessly.
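The explainability role of the knowledge graph (stage 3 above) can be illustrated with a toy graph: invented transactions are linked to shared entities such as cards and devices, and those shared entities become the explanation for a flag.

```python
# Toy knowledge graph as an edge list; all identifiers are invented.
edges = [
    ("txn_1", "device_9"), ("txn_2", "device_9"),  # two transactions, same device
    ("txn_1", "card_7"), ("txn_3", "card_7"),      # two transactions, same card
]

def neighbors(node: str) -> set:
    """All nodes directly connected to the given node."""
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

def shared_entities(txn_a: str, txn_b: str) -> set:
    """Entities connecting two transactions: the 'why' behind a fraud flag."""
    return neighbors(txn_a) & neighbors(txn_b)

print(shared_entities("txn_1", "txn_2"))  # {'device_9'}
```

An analyst validating a flagged transaction can traverse exactly these links to see why the model grouped it with known fraud, rather than trusting an opaque score.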

Rafid is a data modeler who entered the field at the young age of 22, holding an undergraduate degree in Biology and Mathematics from the University of Ottawa. He was inducted into the DMC Hall of Fame by the Data Modeling Institute in July 2020, making him the first Canadian and 10th person worldwide to receive this honor. Rafid possesses extensive experience in creating standardized financial data models and utilizing various modeling techniques to enhance data delivery mechanisms. He is well-versed in data analytics, having conducted in-depth analyses of Capital Markets, Retail Banking, and Insurance data using both relational and NoSQL models. As a speaker, Rafid shared his expertise at the 2021 Data Modeling Zone Europe conference, focusing on the reverse engineering of physical NoSQL data models into logical ones. Rafid and his team recently placed second in an annual AI-Hackathon, focusing on a credit card fraud detection problem. Alongside his professional pursuits, Rafid loves recording music and creating digital art, showcasing his creative mind and passion for innovation in data modeling.

The emergence of online shopping post-pandemic has accelerated the need for e-commerce companies to understand the online behavior of their customers.  Focusing on data modeling best practices for data practitioners, this talk will demonstrate how to focus on the right customer profile, behavior, and site experience data to influence website design, customer taxonomies, and functionality decisions that optimize the customer experience.  From general customer behavior data analysis to machine learning capabilities, it will guide the data practitioner through a framework for success in this business model.  Leveraging a real-world case study, the audience will obtain a concrete example of how to apply these customer data modeling best practices in e-commerce.

You will learn:

  1. How to position the best data modeling method towards e-commerce experiences and customer taxonomies
  2. How to assess the appropriate data correlation methods to influence online customer behavior strategy
  3. How to provide effective insights into e-commerce journey and customer experience strategic decisions

Dr. Kyle Allison is a 20-year veteran in digital analytics, marketing, and business. He has worked with some of the largest ecommerce sites and retail companies in the country, including Best Buy, Dick’s Sporting Goods, and The Exchange. Currently a digital business and analytics consultant at the Doctor of Digital Strategy firm, he helps organizations balance effective digital business strategy planning with analytics and expert consultation. He is proficient in SAS, SPSS, Google Analytics, and Adobe Analytics. He is also a professor of Business Analytics at the University of Texas at Dallas and Midland University.

Building semantic knowledge graphs that both deliver immediate value and evolve as your business model changes over time can seem overwhelming — almost impossible. The most valuable knowledge graphs are flexible and dynamic, adapting to new data types, consumer patterns, and business requirements — but where do we start? In this session, Eliud will break down a simple formula that can jumpstart your Knowledge Graph journey: Model, Map, Connect, and Expand. We start small: identify a limited domain and model it with a semantic ontology. Map existing (or new) data to your model. Connect consumption patterns, applications, and business users to the data. Expand and evolve your data models, data consumers, and data sources using automation. Test often. Iterate. Automate.

We’ll discuss how the Model, Map, Connect, Expand formula can scale to new domains of business, allowing enterprise knowledge graphs to expand to new data and environments and evolve their models and requirements. By nature, the formula is circular, modular, and interconnected: new data consumption requirements drive the need for new models, and new models can open up new analytical or operational capabilities.

We’ll end with a few case studies, showing how organizations use this formula as an ongoing strategy to transform proprietary data silos into semantic data ecosystems and future-proof their data domains for interoperability.

Eliud Polanco is a seasoned data executive with extensive experience in leading global enterprise data transformation and management initiatives. Prior to his current role as President of Fluree, a data collaboration and transformation company, Eliud was Head of Analytics at Scotiabank, Global Head of Analytics and Big Data at HSBC, Head of Anti-Financial Crime Technology Architecture for Deutsche Bank U.S., and Head of Data Innovation at Citi. In his most recent role as Head of Analytics and Data Standards at Scotiabank, Eliud led a full-spectrum data transformation initiative to implement new tools and technology architecture strategies, both on-premises and on Cloud, for ingesting, analyzing, cleansing, and creating consumption-ready data assets.

There are specialized tools (e.g., TARQL and SPARQL Anything) for converting data in traditional tables to RDF triples. The basic idea is to understand the meaning of each column and each table, and to find the terms in the ontology that express that meaning. This process results in a mapping from the table schema to the ontology. The mapping is then executed to convert the data in the relational table to triples: each cell in the table results in a single triple.

The triples collectively comprise the knowledge graph. The data may then be queried to produce a table of exactly what you want. The query language is called SPARQL.
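As a sketch of the mapping idea just described — in plain Python with illustrative names, not the output of TARQL or any real RDF library — each row yields a subject IRI, each mapped column names an ontology predicate, and each mapped cell becomes one triple:

```python
# Sketch of the table-to-triples mapping described above, in plain Python.
# The EX namespace, sample table, and column mapping are illustrative
# assumptions, not part of any real ontology.

EX = "http://example.org/"

# A traditional table: one row per person.
rows = [
    {"id": "p1", "name": "Ada", "city": "London"},
    {"id": "p2", "name": "Grace", "city": "New York"},
]

# The schema-to-ontology mapping: the result of understanding what each
# column means and finding the matching ontology term.
column_to_predicate = {"name": EX + "hasName", "city": EX + "livesIn"}

def table_to_triples(rows, mapping, id_col="id"):
    """Each mapped cell becomes one (subject, predicate, object) triple."""
    triples = []
    for row in rows:
        subject = EX + row[id_col]  # the row's key becomes the subject IRI
        for col, value in row.items():
            if col in mapping:
                triples.append((subject, mapping[col], value))
    return triples

triples = table_to_triples(rows, column_to_predicate)

# The rough analogue of SPARQL's: SELECT ?s ?city WHERE { ?s ex:livesIn ?city }
cities = {s: o for s, p, o in triples if p == EX + "livesIn"}
```

A real pipeline would emit proper IRIs and typed literals through an RDF library and query the resulting graph with SPARQL, but the shape of the mapping is the same.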

Michael Uschold has over thirty years’ experience in developing and transitioning semantic technology from academia to industry. He pioneered the field of ontology engineering, co-authoring the first paper and giving the first tutorial on the topic in 1995 in the UK. 

As a senior ontology consultant at Semantic Arts since 2010, Michael trains and guides clients to better understand and leverage semantic technology.  He has built commercial enterprise ontologies in finance, insurance, healthcare, commodities markets, consumer products, electrical device specifications, manufacturing, corporation registration, and data catalogs.  The ontologies are used to create knowledge graphs that drive production applications.  This experience provides the basis for his book:  Demystifying OWL for the Enterprise, published in 2018.

During 2008-2009, Uschold worked at Reinvent on a team that developed a semantic advertising platform that substantially increased revenue. As a research scientist at Boeing from 1997-2008 he defined, led, and participated in numerous projects applying semantic technology to enterprise challenges. He received his Ph.D. in AI from The University of Edinburgh in 1991 and an MSc. from Rutgers University in Computer Science in 1982.

In Ensemble Modeling (Data Vault, Anchor Modeling, Focal Point, etc.), we store all context in separate constructs. This presentation will reveal why this matters and why breaking context up into multiple constructs is a benefit, not an issue! We will discuss the rule of 7, the years 1718 and 1892, why context might seem redundant when it is not, and why history from a business perspective does not match technical history. In other words, a whole session on how we capture context in Agile modeling.

Remco is Vice President of International Programs at Genesee Academy. Remco is working in Business Intelligence and Enterprise Data Warehousing (EDW) with a focus on modeling and architecture including Ensemble and Data Vault modeling. He works internationally in Europe and is based in the Netherlands.
He worked for several consulting companies in the Netherlands as a business intelligence consultant, covering reporting, ETL, and modeling, before starting his own companies, Coarem and BI Academy, and joining Genesee Academy. He has years of experience teaching and speaking on modeling, business intelligence, and data warehousing topics.

Before attempting to initiate digital improvements, your organization will need to take a look at its data-driven decision-making process. Aiken and Cesino provide executive insight into what a focused implementation looks like when an organization needs to make strategic decisions. The presentation will show how organizations rapidly realize concrete top- and bottom-line improvements directly attributable to maturing data practices. We will address key investments in technology, capabilities, and processes to achieve a sustained competitive advantage.

You will learn:

  • How to identify key practices in support of a data strategy.
  • How to align data – making it more available to inform business strategy.
  • Where to look for process improvements.

Peter Aiken, PhD, is an acknowledged Data Management (DM) authority, an Associate Professor at Virginia Commonwealth University, President of DAMA International, and Associate Director of the MIT International Society of Chief Data Officers. For more than 35 years, Peter has learned from working with hundreds of data management practices in 30 countries, including some of the world’s most important. His 12 books include the first on CDOs (the case for data leadership), along with others focusing on data monetization, modern strategic data thinking, and objectively specifying what it means to be data literate. International recognition has resulted in an intensive schedule of events worldwide (pre-Covid). Peter also hosts the longest-running data management webinar series, hosted by our partners at Dataversity. Starting before Google, before data was big, and before data science, Peter has founded several organizations that have helped more than 200 organizations leverage data; specific savings have been measured at more than $1.5B USD. His latest is Anything Awesome.

As CEO of Visible Systems Corporation, Mr. Cesino enables companies to drive their business strategy to execution – viewing their enterprise from a “design thinking” perspective when creating new products and services. Additionally, through the award of a Small Business Innovative Research (SBIR) grant, he has led international eCommerce activities – providing technical input to the development and application of Models and Assessment Tools. Mr. Cesino holds an MBA from Suffolk University’s School of Management and degrees in Computer Science and Information Technology from Boston University and Northeastern University, respectively.

Keynotes

DataOps, GitOps, and Docker containers are changing the role of Data Modeling, now at the center of end-to-end metadata management.
Success in the world of self-service analytics, data meshes, microservices, and event-driven architectures can be challenged by the need to keep data catalogs and dictionaries interoperable with the constantly evolving schemas of databases and data exchanges.
In other words, the business side of human-readable metadata management must be up to date and in sync with the technical side of machine-readable schemas. This process can only work at scale if it is automated.
It is hard enough for IT departments to keep schemas in sync across the various technologies involved in data pipelines. For data to be useful, business users must have an up-to-date view of the structures, complete with context and meaning.

In this session, we will review the options available to create the foundations for a data management framework providing architectural lineage and curation of metadata management.

It is no secret that the world of data shifts and evolves. How can people keep up with this evolving world, especially with all the hype around AI? Come join Jordan Morrow and Chandra Donelson as they talk about the empowering world of data literacy. In this session they will discuss the evolving and shifting world of data and AI, what data literacy is, and how to get people excited about data from an early age so they can be more data-driven in this world of data and AI.

Jordan Morrow is known as the “Godfather of Data Literacy”, having helped pioneer the field by building one of the world’s first data literacy programs and driving thought leadership. He is also the founder and CEO of Bodhi Data. Jordan is a global trailblazer in the world of data literacy and enjoys his time traveling the world speaking and/or helping companies. He served as the Chair of the Advisory Board for The Data Literacy Project, has spoken at numerous conferences around the world, and is an active voice in the data and analytics community. He has also helped companies and organizations around the world, including the United Nations, build and understand data literacy.

Outside of his work in data, Jordan is married with 5 kids. Jordan loves fitness and the mountains, entering and racing in multiple ultra-marathons, and loves to travel with his wife and family. Jordan loves to read, often going through multiple books at a time (or using Audible). Jordan is the author of three books: Be Data Literate, Be Data Driven, and Be Data Analytical.

Can you imagine a world without some form of extract, transform, and load, or without system integrations that copy and manipulate data through APIs? Learn through case studies as Dave covers:

  • Challenges with traditional enterprise data landscape
  • Application Centric vs. Data Centric
  • Model Driven principles at the core
  • Prerequisites to achieve

Dave McComb is the President and co-founder of Semantic Arts. He and his team help organizations uncover the meaning in the data from their information systems. Dave is also the author of “The Data-Centric Revolution”, “Software Wasteland”, and “Semantics in Business Systems”. For 20 years, Semantic Arts has helped firms of all sizes in this endeavor, including Procter & Gamble, Goldman Sachs, Schneider Electric, LexisNexis, Dun & Bradstreet, and Morgan Stanley.

Prior to Semantic Arts, Dave co-founded Velocity Healthcare, where he developed and patented the first fully model driven architecture. Prior to that, he was a part of the problem.

Mini-Hackathons

We will tackle these four challenges in teams and share our findings.

Skills

Moderators: Pascal Desmarets and Marco Wobben

What are the purposes (if any) of a normalized logical data model in a NoSQL world? If the LDM offers value, provide specific examples. If the LDM just gets in the way, explain why.

Technologies

Moderator: Kim Sever

How can data professionals leverage LLMs and related AI technology? Identify at least five use cases with examples.

Growth

Moderators: Chandra Donelson and Jordan Morrow

How can we ensure projects follow good design principles, even with limited data modeling resources? There are only so many data modelers. How can we educate others (Agile practitioners, management, NoSQL developers, etc.) to be more data-focused? Provide an approach involving education, assessment, and measurement.

Semantics

Moderator: Eugene Asahara

What is the role of a traditional data modeler in the world of semantics? Be specific on exactly when and where data modelers can add value.

Post-conference Workshops

Skills

Unlock the potential of your data management career with the Certified Data Management Professional (CDMP) program by DAMA International. As the global leader in Data Management, DAMA empowers professionals like you to acquire the skills, knowledge, and recognition necessary to thrive in today’s data-driven world. Whether you’re a seasoned data professional or an aspiring Data Management expert, the CDMP certification sets you apart, validating your expertise and opening doors to new career opportunities.

CDMP is recognized worldwide as the gold standard for Data Management professionals. Employers around the globe trust and seek out CDMP-certified individuals, making it an essential credential for career advancement.

All CDMP certification levels require passing the Data Management Fundamentals exam. This workshop is aimed at letting you know what to expect when taking the exam and how to define your best strategy for answering it. It is not intended to teach you Data Management, but to introduce you to CDMP and briefly review the most relevant topics to keep in mind. After our break for lunch, you will have the opportunity to take the exam in its Pay If You Pass (PIYP) modality!

Through the first part of this workshop (9:00-12:30), you will get:

  • Understanding of how CDMP works, what type of questions to expect, and best practices when responding to the exam.
  • A summary of the most relevant topics of Data Management according to the DMBoK 2nd Edition
  • A series of recommendations for you to define your own strategy on how to face the exam to get the best score possible
  • A chance to answer the practice exam to test your strategy


Topics covered:

  1. Introduction to CDMP
  2. Overview and summary of the most relevant points of DMBoK Knowledge Areas:
    1. Data Management
    2. Data Handling Ethics
    3. Data Governance
    4. Data Architecture
    5. Data Modeling
    6. Data Storage and Operations
    7. Data Integration
    8. Data Security
    9. Document and Content Management
    10. Master and Reference Data
    11. Data Warehousing and BI
    12. Metadata Management
    13. Data Quality
  3. Analysis of sample questions

We will break for lunch and come back full of energy to take the CDMP exam in the modality of PIYP (Pay if you Pass), which is a great opportunity.


Those registered for this workshop will receive an Event CODE allowing them to purchase the CDMP exam at no charge before taking the exam. The Event CODE will be emailed along with instructions to enroll in the exam. Once enrolled, access to the Practice Exam is available, and it is strongly recommended to take it as many times as possible before the exam.


Considerations:

  • PIYP means that if you pass the exam (a passing score on all exams is 60% correct answers), you must pay for it (US$300.00) before leaving the room, so be ready with your credit card. If you are expecting a score of 70 or above and you get 69, you still must pay for the exam.
  • You must bring your own personal device (laptop or tablet, not a mobile phone) with the Chrome browser installed.
  • Work laptops are not recommended, as they might have firewalls that will not allow you to access the exam platform.
  • If English is not your first language, you should enroll in the exam as ESL (English as a Second Language), and you may wish to install a translator as a Chrome extension.
  • Data Governance and Data Quality specialty exams will also be available.


If you are interested in taking this workshop, please complete this form to receive your Event CODE and to secure a spot to take the exam.

Technologies

Building on an understanding of Large Language Models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard, Meta’s LLaMA, and more, this immersive workshop is tailored for conference attendees seeking to enhance their LLM prompting skills.

This hands-on experience kicks off with a brief recap of the architecture and functionality of LLMs, ensuring that all participants are on the same page. The core of the workshop is dedicated to interactive exercises and real-world scenarios, enabling participants to craft, test, and refine their prompts with expert guidance.

Participants will navigate the art of prompt engineering, employing advanced strategies and frameworks in a live environment. Collaborative sessions will allow for peer feedback and insights, fostering a community of learners working together towards a common goal.

The workshop also includes exploration of controlling ChatGPT, using plugins, and integrating LLMs into daily work. Participants will have the opportunity to work on scenarios, applying LLMs to solve real challenges.

Designed as an extension to the “Unleashing the Power of Large Language Models” course (not a pre-requisite), this workshop is an essential next step for IT professionals ready to implement and innovate with LLMs in their daily work.

Tom saw his first computer (actually a Teletype ASR-33 connected to one) in 1968, and it was love at first sight. He has nearly five decades of experience in the field of computer science, focusing on AI, VLDBs, and Business Intelligence. He co-founded and served as CEO of a profitable software consulting and managed services firm that grew to over 100 employees. Under his leadership, the company regularly won awards for culture and revenue growth. In 2015, Niccum led the successful sale of the company to database powerhouse Teradata, where he became a regional Senior Partner for consulting delivery and later transitioned to a field enablement role for Teradata’s emerging AI/ML solutions division. He is currently a Principal Consultant for Iseyon.

Niccum earned his PhD in Computer Science from the University of Minnesota in 2000, and he also holds an M.S. and B. CompSci from the same institution. His academic achievements were recognized with fellowships such as the High-Performance Computing Graduate Fellowship and the University of Minnesota Graduate School Fellowship.

In recent years, Niccum has continued his professional development with courses and certifications in areas like deep learning, signifying his commitment to staying abreast of cutting-edge technologies. His blend of academic accomplishment, entrepreneurial success, and industry expertise make him a leading figure in the integration of technology and business strategies.

Growth

Learn how to ANCHOR your change initiative through practical strategies on mapping your journey, getting people on board (despite limited resources), and overcoming rough seas – creating and sustaining a successful data management program.

Building a strong data management program takes more than just one or two sporadic changes – it requires a culture that allows for your new program to stay for the long haul. This workshop focuses on the people side of change leadership through the ANCHOR change model – a tried-and-tested model based on my experience in instituting data management programs.

We’ll focus on practical strategies to:

  • Create business case models to get everyone moving in the same direction
  • Get more helping hands (despite limited resources)
  • Create and execute communication plans
  • Overcome rough seas by removing frequently experienced barriers
  • Measure the success and maturity of your data governance journey
  • Turn your change initiative into an organizational habit

At the end of this session, you’ll be equipped with the handouts, skills and expertise to make sure your change initiative ship stays afloat and sailing!

Aakriti Agrawal, MBA, CDMP Practitioner, is a Manager of Data Governance at American Express. Prior to this, she helped stand up a Data Governance program at Ameritas. She has a Master’s degree with a focus in organizational and nonprofit change. On the side, she enjoys philanthropy work – she co-founded a nonprofit, Girls Code Lincoln, and is in the process of starting her second nonprofit – The Nonprofiting Org. Through her professional and philanthropic experiences, Aakriti has been able to practice effective change leadership by generating buy-in, developing processes, and creating robust programs that grow and thrive. She has been recognized with the Inspire Founders Award, the Emerging Young Leaders Award, has been nominated to the Forbes 30 Under 30 List, and serves on numerous boards of national and international nonprofits. 

Semantics

As is well known in the data modeling community, successful data modeling is paramount to the success of data initiatives. This fact is often overlooked, which is a key reason for the high failure rate of data initiatives. This is at least partly due to data modeling being considered too waterfallish (top-down) for the agile culture.

In agile development, the role of business stakeholders is central, but they can only contribute to modeling if they genuinely understand the connection between the model and the project outcome.

This workshop is about an agile data modeling approach, which the workshop facilitator has successfully practiced with hundreds of people with no previous data modeling experience.

Key takeaways:

  • How to break a large modeling exercise into such pieces that make sense to business stakeholders.
  • How to turn the ability of business stakeholders to understand and explain the behavior of their business into an ability to define and understand data models.
  • How to bridge the gap between business problems and their accurate representations as data models.

Hannu Järvi is one of the leading Data Trainers in Northern Europe. Throughout his career, he facilitated change management initiatives across multiple large corporations and has trained over a thousand individuals on how to design data products through a conceptual modeling approach. In 2019 he co-founded Ellie.ai and works as a Customer Success Officer of the company – helping large corporations to change their data culture and ways of working. In his free time, he likes to enjoy Finnish nature and spends time with the family at his summer cabin.

Please bring your laptop for the hands-on portion of the workshop, and feel free to bring pen and paper if you prefer handwriting any notes.

Wondering how to break down data silos and connect the dots? Looking to develop AI and ML initiatives? Semantic knowledge graphs and Ontotext’s GraphDB provide a data foundation for understanding the meaning of data, enabling standardized data exchange, efficient data discovery, seamless integration, and accurate interpretation. Our graph database engine uses AI to put data in context and deliver deeper insights by interlinking information across silos and enriching it with external domain knowledge. Join this workshop to gain:

  • An overview of graphs, knowledge graphs, and semantic technologies
  • An overview of Ontotext and GraphDB capabilities and features
  • A demo of capabilities across domains and integration with LLMs
  • A hands-on workshop, which will include:
    1. A tour of GraphDB – what it is and how to use it
    2. Leveraging an ontology and importing it into GraphDB
    3. Visualizing the ontology
    4. Loading data into GraphDB
    5. Data validation using SHACL
    6. Querying and visualizing data in GraphDB

Sumit Pal is an ex-Gartner VP Analyst in the Data Management & Analytics space. Sumit has more than 30 years of experience in the data and software industry in roles spanning companies from startups to enterprise organizations, building, managing, and guiding teams and building scalable software systems across the stack – from the middle tier to the data layer, analytics, and UI – using Big Data, NoSQL, DB internals, Data Warehousing, Data Modeling, and Data Science. He is also the published author of a book on SQL engines and developed a MOOC course on Big Data.

Lock in the lowest prices today! Prices increase as tickets sell.

Original price was: $1,995.00. Current price is: $1,299.00.

Test your DMZ knowledge: Game #1

We have had over 500 speakers at our conferences since 2012. Do you know who the keynote speakers are and when they spoke? Take a guess and roll your mouse over the picture to see if you are right!