Sharpen your skills, expand your network, and lead your business in getting more value from its data by attending Data Modeling Zone (DMZ) 2026, March 3-5 at Oracle in Redwood City, just 20 minutes south of San Francisco.

DMZ is the only global conference dedicated entirely to data modeling. You’ll gain practical skills, fresh perspectives, and the connections you need to plan and design for systems that deliver business value. From hands-on workshops and case studies across industries to sessions on communications, data-driven AI, data mesh, and semantics, DMZ 2026 brings together the tools and techniques that are shaping the future of data. The program features over 50 sessions taught by speakers from more than ten countries, across five tracks: Foundational Modeling Skills, Intermediate/Advanced Modeling and Case Studies, Data Strategy/CDMP, AI/Semantics, and Communication.

At its core, DMZ is about solving real problems. Every year, organizations waste millions of dollars on systems that don’t deliver because business needs aren’t clearly understood. Data modeling addresses this by providing a precise language between business and IT. Attendees leave the conference equipped to reduce waste, improve communication, and deliver applications that add value instead of frustration.

Unlike other tech conferences that touch on data modeling in passing, DMZ is the only event completely dedicated to the craft. That focus means you’ll find deep sessions, experienced instructors, and a community that understands the challenges you face—whether you’re a business analyst, data scientist, data modeler, data architect, database administrator, data governance practitioner, or technologist.

DMZ offers the rare chance to step back from the daily grind and focus on your craft. Three days immersed in modeling will give you perspective, renew your motivation, and equip you with tools to bring back immediate improvements. Join us to take your data skills to the next level. It’s professional development at its best—intense, practical, energizing, and of course, fun!

DMZ 2026 Program

Lock in the lowest prices today - prices increase as tickets sell

Original price: $2,495.00. Current price: $1,495.00.

Pre-conference Workshops

Foundational Modeling Skills

DataOps, GitOps, and Docker containers are changing the role of data modeling, which now sits at the center of end-to-end metadata management.
Success in the world of self-service analytics, data meshes, microservices, and event-driven architectures can be challenged by the need to keep data catalogs and dictionaries interoperable with the constantly evolving schemas of databases and data exchanges.
In other words, the business side of human-readable metadata management must be up to date and in sync with the technical side of machine-readable schemas. This process can only work at scale if it is automated.
It is hard enough for IT departments to keep schemas in sync across the various technologies involved in data pipelines. For data to be useful, business users must also have an up-to-date view of the structures, complete with context and meaning.

In this session, we will review the options available to create the foundations for a data management framework that provides architectural lineage and curation of metadata.
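
To make the automation concrete, here is a minimal sketch (ours, not the presenters’) of one link in such a pipeline: the same machine-readable JSON Schema both validates records and generates the human-readable dictionary entry, so the two sides cannot drift apart. The entity and field names are invented for illustration.

```python
# Minimal sketch: the same JSON Schema validates records (machine-readable
# side) and generates the data dictionary entry (human-readable side).
from jsonschema import validate  # pip install jsonschema

# A machine-readable schema for a hypothetical Customer entity.
customer_schema = {
    "title": "Customer",
    "type": "object",
    "properties": {
        "customer_id": {"type": "integer", "description": "Surrogate key."},
        "name": {"type": "string", "description": "Legal name of the party."},
    },
    "required": ["customer_id", "name"],
}

# Technical side: validate a sample record (raises on schema violations).
validate({"customer_id": 42, "name": "Acme Ltd"}, customer_schema)

# Business side: emit the catalog/dictionary entry from the same source of truth.
for field, spec in customer_schema["properties"].items():
    print(f'{customer_schema["title"]}.{field} ({spec["type"]}): {spec["description"]}')
```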

Assuming no prior knowledge of data modeling, we start off with an exercise that will illustrate why data models are essential to understanding business processes and business requirements. Next, we will explain data modeling concepts and terminology, and provide you with a set of questions you can ask to quickly and precisely identify entities (including both weak and strong entities), data elements (including keys), and relationships (including subtyping). We will discuss the three different levels of modeling (conceptual, logical, and physical), and for each explain both relational and dimensional mindsets.

Steve Hoberman’s first word was “data”. He has been a data modeler for over 30 years, and thousands of business and data professionals have completed his Data Modeling Master Class. Steve is the author of 11 books on data modeling, including The Align > Refine > Design Series and Data Modeling Made Simple. Steve is also the author of Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, creator of the Data Modeling Institute’s Data Modeling Certification exam, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, lecturer at Columbia University, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award.

Conceptual Data Models (CDMs) are not difficult to create and provide huge business value by increasing communication and understanding between business and IT teams. But sometimes it’s hard to know where to start. We’ll start this session by creating a CDM as a collaborative group. Next, you’ll create a CDM for your own initiative and receive feedback for refinement.
You will:

  • Experience the collaborative, cross-functional nature of conceptual modeling.
  • Review CDM concepts and benefits.
  • Build a conceptual model for a chocolate shop collaboratively, just as business and IT stakeholders should work together (a toy sketch of such a model appears after this list).
  • Create your own CDM and refine it.
  • Present the model to others and receive feedback to help its evolution.
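
To give a taste of what a conceptual model captures, here is a toy sketch in Python of business terms and relationships for a chocolate shop like the one modeled in this workshop; the specific entities and cardinalities are our illustrative assumptions, not the workshop’s actual model.

```python
# Toy sketch of a conceptual data model for a chocolate shop: business terms
# and relationships only -- no keys, data types, or tables yet.
entities = {
    "Customer": "A person or company that buys chocolate.",
    "Order": "A request by a customer for one or more products.",
    "Product": "A chocolate item offered for sale.",
}

# Each relationship reads: subject, verb phrase, object, cardinality.
relationships = [
    ("Customer", "places", "Order", "one-to-many"),
    ("Order", "contains", "Product", "many-to-many"),
]

for subject, verb, obj, cardinality in relationships:
    print(f"Each {subject} {verb} {obj}(s) [{cardinality}]")
```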

Kasi Anderson has been in the data world for close to 25 years, serving in multiple roles including data architect, data modeler, data warehouse designer and implementer, business intelligence evangelist, data governance specialist, and DBA. She is passionate about bridging the gap between business and IT and working closely with business partners to achieve corporate goals through the effective use of data. She loves to examine data ecosystems and figure out how to extend architectures to meet new requirements and solve challenges. She has worked in many industries including manufacturing and distribution, banking, healthcare, and retail.

In her free time, Kasi loves to read, travel, cook, and spend time with her family.  She enjoys hiking the beaches and mountains in the Pacific Northwest and loves to find new restaurants and wineries to enjoy.       

Intermediate/Advanced Modeling and Case Studies

The business value of logical and physical data modeling, and other good data practices, is being increasingly challenged in today’s fast-paced business environment of Agile, NoSQL, and GenAI. But the changing business landscape has, paradoxically, created many more opportunities for data professionals to add value through data modeling, process modeling, good database design, data engineering, and other data-related practices. In this 3-hour session, we will explore material from my book Data Model Storytelling and discuss the various ways in which data practitioners can add value to both business and IT processes within their organizations. Bring an open mind and lots of questions!

Larry Burns has worked in IT for more than 40 years as a data architect, database developer, DBA, data modeler, application developer, consultant, and teacher. He holds a B.S. in Mathematics from the University of Washington, and a Master’s degree in Software Engineering from Seattle University. He most recently worked for a global Fortune 200 company as a Data and BI Architect and Data Engineer. He contributed material on Database Development and Database Operations Management to the first edition of DAMA International’s Data Management Body of Knowledge (DAMA-DMBOK) and is a former instructor and advisor in the certificate program for Data Resource Management at the University of Washington in Seattle. He has written numerous articles for TDAN.com and DMReview.com and is the author of Building the Agile Database (Technics Publications LLC, 2011), Growing Business Intelligence (Technics Publications LLC, 2016), and Data Model Storytelling (Technics Publications LLC, 2021).

This workshop will show you a set of new capabilities that let your data model serve applications with the interfaces they want, without duplicating data.

From scratch, you will build a data model using a new Oracle Database feature called JSON Relational Duality and then define JSON document interfaces on these tables. This data model lets you read and write through either representation while the database handles consistency and access rules.

After completing the exercises, you’ll walk away with the skills to unify your data model and enable diverse application patterns, without compromise.
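
For a flavor of what the exercises involve, the sketch below uses the python-oracledb driver to create two normalized tables and a duality view over them. The table, column, and connection names are our assumptions, and the exact DDL taught in the workshop may differ; treat this as orientation, not the official material.

```python
# Minimal sketch with the python-oracledb driver; connection details, table
# and column names are assumptions for illustration.
import oracledb

conn = oracledb.connect(user="demo", password="demo", dsn="localhost/freepdb1")
cur = conn.cursor()

# The normalized relational tables remain the single source of truth.
cur.execute("""CREATE TABLE orders (
                 order_id      NUMBER PRIMARY KEY,
                 customer_name VARCHAR2(100))""")
cur.execute("""CREATE TABLE order_items (
                 item_id  NUMBER PRIMARY KEY,
                 order_id NUMBER REFERENCES orders,
                 product  VARCHAR2(100),
                 qty      NUMBER)""")

# The duality view exposes the same rows as updatable JSON documents.
cur.execute("""
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW order_dv AS
SELECT JSON {'_id'      : o.order_id,
             'customer' : o.customer_name,
             'items'    : [SELECT JSON {'itemId'  : i.item_id,
                                        'product' : i.product,
                                        'qty'     : i.qty}
                           FROM order_items i WITH INSERT UPDATE DELETE
                           WHERE i.order_id = o.order_id]}
FROM orders o WITH INSERT UPDATE DELETE""")
```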

JP (it’s short for Jean-Pierre) Dijcks is a distinguished product manager for Oracle JSON Database. 

After a successful product management career at Oracle driving products in the data space (ETL, Data Quality and Governance, Data Warehousing, Big Data, and cloud), JP spent 6 years at Visa. At Visa he led an international product management team for Visa’s Data and AI Platform. The team managed critical data services and platforms powering Visa’s journey to provide value-added services through data and AI.

Now, in his second tour at Oracle, JP can be found in California, focusing on the Oracle Autonomous JSON Database business and product. In this capacity, JP aims to help customers simplify the data tier to improve governance and AI while retaining all the benefits of JSON document development patterns.

Data Strategy/CDMP

Unlock the potential of your data management career with the Certified Data Management Professional (CDMP) program by DAMA International. As the global leader in Data Management, DAMA empowers professionals like you to acquire the skills, knowledge, and recognition necessary to thrive in today’s data-driven world. Whether you’re a seasoned data professional or an aspiring Data Management expert, the CDMP certification sets you apart, validating your expertise and opening doors to new career opportunities.

CDMP is recognized worldwide as the gold standard for Data Management professionals. Employers around the globe trust and seek out CDMP-certified individuals, making it an essential credential for career advancement.

All CDMP certification levels require passing the Data Management Fundamentals exam. This workshop is aimed at letting you know what to expect when taking the exam and how to define your best strategy for answering it. It is not intended to teach you Data Management, but to introduce you to the CDMP and briefly review the most relevant topics to keep in mind. After our break for lunch, you will have the opportunity to take the exam in its PIYP (Pay If You Pass) modality!

During the first part of this workshop (9:00-12:30), you will get:

  • An understanding of how the CDMP works, what type of questions to expect, and best practices for answering the exam.
  • A summary of the most relevant topics of Data Management according to the DMBOK 2nd Edition Revised.
  • A series of recommendations for defining your own strategy for facing the exam to get the best score possible.
  • A chance to take the practice exam to test your strategy.

Topics covered:

  1. Introduction to CDMP
  2. Overview and summary of the most relevant points of DMBoK Knowledge Areas:
    1. Data Management
    2. Data Handling Ethics
    3. Data Governance
    4. Data Architecture
    5. Data Modeling
    6. Data Storage and Operations
    7. Data Integration
    8. Data Security
    9. Document and Content Management
    10. Master and Reference Data
    11. Data Warehousing and BI
    12. Metadata Management
    13. Data Quality
  3. Analysis of sample questions

We will break for lunch and come back full of energy to take the CDMP exam in the PIYP (Pay if You Pass) modality, a great opportunity.

Those registered for this workshop will get an Event CODE to purchase the CDMP exam at no charge before taking the exam. The Event CODE will be emailed along with instructions to enroll in the exam. Once this is done, you will have access to the Practice Exam, and it is strongly recommended that you take it as many times as possible before the exam.

Considerations:

  • You will receive instructions to enroll in the CDMP exam on a PIYP basis.
  • PIYP means that if you pass the exam (all exams are passed by getting 60% of answers correct), you must pay for it (US$300.00) before leaving the room, so be ready with your credit card. If you are expecting a score of 70 or above and you get 69, you still must pay for the exam.
  • You must bring your own personal device (a laptop, not a tablet or mobile phone) with the Chrome browser.
  • Work laptops are not recommended, as they might have firewalls that block access to the exam platform.
  • If English is not your primary language, you must indicate so when you receive the workshop instructions by email, as this will give you 20 extra minutes to complete the exam (regular time is 90 minutes).
  • All the specialty exams will be available.

If you are interested in taking this workshop, please complete this form to receive your Event CODE and to secure a spot to take the exam.

Data-driven AI

The session commences by establishing clear expectations and providing an accessible overview of the functioning of large language models (LLMs). It highlights the models’ strengths in generating ideas and suggesting stylistic elements, while also acknowledging their limitations, such as factual inaccuracies, tonal drift, and the prevalence of clichés. From the outset, participants are encouraged to reflect on their own writing processes and are introduced to the workshop’s guiding principles: transparency, consent, and ethical responsibility.

The workshop’s core structure revolves around three practical labs. In the initial lab, participants employ artificial intelligence (AI) to generate and refine brainstorming sessions, transitioning from initial concepts to comprehensive beat sheets. The subsequent lab transitions into drafting, where attendees experiment with techniques such as the “style sandwich” to harmonize AI assistance with authentic voice, resulting in concise passages and facilitating peer feedback exchanges. The concluding lab emphasizes editing and refinement, encouraging the simultaneous comparison of AI suggestions for clarity, conciseness, and rhythm, while incorporating peer review into the iterative process.

The workshop concludes with a focused discussion on ethics and intellectual property, encompassing plagiarism, attribution, and the responsible utilization of sensitive data. Participants emerge from the workshop not only with tangible deliverables (an outline or draft, a revision pass, and a personal AI use policy) but also with practical tools such as prompt templates, revision checklists, and a comprehensive understanding of best practices for seamlessly integrating AI into their writing process.

Horen has a PhD in Math and Biochemistry and has worked as a scientist on many top-secret government projects, as well as for MCI WorldCom, Sensis (SAAB), and MSC. He has 24 patents, including two developed for the DoD. For artificial intelligence, he specializes in automated reasoning, analysis of deep nested networks, and logical and probabilistic inference. For biochemistry, he specializes in DNA quantum tunneling, specifically studying tunneling rates.

AI is everywhere, yet organizations still fail to generate value with it, and people are cautious and afraid of what the future of AI might bring. Tiankai introduces the framework of the 5Cs – competence, collaboration, communication, creativity, and conscience – and through interactive exercises attendees will learn practical steps for putting the human at the center of their AI strategy.

Tiankai Feng is a Data and AI leader by day, a musician by night, and an optimist at heart. His experiences span marketing analytics, business performance management, data product ownership, capability leadership, data governance, data strategy, and AI transformation. Working at TD Reply, adidas, and Thoughtworks allowed him to experience data and AI challenges from both consultant and client perspectives, helping him identify patterns in what works and what spectacularly doesn’t. Author of Humanizing Data Strategy, TEDx speaker, and frequent keynote presenter, Tiankai strongly believes in keeping humans at the center of our AI future. He often uses humor, music, and perfectly timed memes to make AI less intimidating and more approachable—because if we’re going to work with machines that sound human, we might as well have some fun with it.

Communication

Change is no longer the status quo; disruption is.  Over the past five years, major disruptions have happened in all our lives that have left some of us reeling while others stand tall, egging on more.  All people approach disruption differently.  Some seem to adjust and quickly look for ways to optimize or create efficiencies for the upcoming change.  Others dig in their heels, question everything, and insist on all the answers, in detail, right away.  Then there are the ones who are ready and willing to take on disruption.  In this workshop you will find out which profile best suits you, how that applies to big organizational efforts like data governance and AI and their impact on data management and data modeling, and finally how you can bridge the divide between these profiles to harness the disruption and calm the chaos.

  • Disruption Research
  • What is the sustainable disruption model?
  • Are you a Disrupter, Optimizer or Keeper:  Take the Quiz
  • Working with others 
  • Three take-aways

Laura Madsen is a global data strategist, keynote speaker, and author.  She advises data leaders in healthcare, government, manufacturing, and tech.  Laura has spoken at hundreds of conferences and events, inspiring organizations and individuals alike with her iconoclastic disrupter mentality.

Improv theater requires fast, flexible, and creative thinking; active listening and clear communication; and the ability to take action without fear of failure. These skills are also vital for data professionals who must collaborate across teams, present complex insights, and adapt quickly in fast-changing environments. The Improviser’s Mindset is an interactive workshop that introduces participants to the core principles of improvisation and provides a framework for unlocking creative potential and building authentic connections. Throughout the session, participants will be up on their feet playing low-pressure, laughter-filled games in partners and small groups, practicing and reflecting on the skills required for agile leadership and innovative problem-solving.

Austin Meyer is an award-winning filmmaker, educator, and member of Only People, a learning experience design studio inspired by the art & activism of John Lennon & Yoko Ono. Through his work, Meyer crafts stories and interactive learning experiences that change the way people walk through the world by inspiring empathy, curiosity, and wonder. He does so through a unique lens that blends journalistic rigor and ethics alongside a spirit of play and improvisation.

As a documentary filmmaker, Meyer’s work has been featured by HBO, Hulu, Apple TV, The New York Times, National Geographic, and The Washington Post among others. He has also worked with organizations such as The United Nations, Stanford University, The North Face, and JP Morgan Chase. 

Meyer has received recognition from various outlets for his documentary work. As the winner of the New York Times’ International Reporting Fellowship with Pulitzer Prize winner Nicholas Kristof, Meyer documented the opioid crisis in the US, malnutrition in India, and human trafficking in Nepal. As a recipient of the Level 1 Grant from the National Geographic Society, Meyer is also a National Geographic Explorer. His work for National Geographic has spanned continents and subject matter, from maternal healthcare in Sub-Saharan Africa, to the refugee crisis in the Middle East, wildfire disasters in his hometown of Santa Rosa, California, and animal exploitation in the industrial food system. 

Beyond the camera, Meyer is a professional theatrical improviser. Over the past decade he has taught hundreds of workshops on applied improv & storytelling to businesses, schools, and leaders around the world. Meyer holds a BA in creative writing and MA in journalism from Stanford University.

Seamus Yu Harte is the Head of Learning Experience Design for the Electives Program at the Hasso Plattner Institute of Design (aka the d.school) and the founder of Only People, a learning experience design studio inspired by the art & activism of John Lennon & Yoko Ono.

Prior to Stanford d.school & Only People, Seamus was the Senior Producer for The John Lennon Educational Tour Bus, Learning Experience Designer at Digital Media Academy and Creative Director and Director of Radical Experiments at Nearpod. Project-based learning & radical collaboration have been at the core of Seamus’ entire career.

His work at the Stanford d.school includes overseeing the design, development, and delivery of over 30 elective courses every academic year—all project-based, team-taught, radical collaborations that amount to over 1,000 Stanford students and nearly 150 Faculty & Lecturers in the d.school teaching community.

He currently co-teaches a course titled How to Shoot for the Moon, a radical collaboration at the Stanford d.school, described as a “kaleidoscope of curriculum inspired by the science and art of space exploration to help students discover who they are, why they’re here, where they want to go, and how to experiment towards getting there.”

From Yoko Ono to David Kelley, Seamus has had the opportunity to teach and learn with world-class creatives. He holds a BS in Sound Design from SAE and an MFA in Documentary Film + Video from Stanford University, where he also received Fellowships from The Stanford Institute for Creativity and the Arts (SiCA) and The San Francisco Foundation.

ORGANIZATION

Only People is a network of experts designed to help individuals, teams and organizations imagine, make and champion social change. Our methods are inspired by the life and legacy of John Lennon and Yoko Ono and informed by the science, research, and art of teaching and learning at Stanford University. In a nutshell: Only People helps people remix how the(ir) world works.

The Main Event

Foundational Modeling Skills

Many times, data modeling is misunderstood and, in turn, poorly supported. How can a data modeler complete a quality model with limited funds and resources? It certainly seems like a daunting challenge, yet it can be done. It has been done.
You will learn how to:

  • Determine business priorities
  • Identify quick wins
  • Prioritize steady progress
  • Communicate progress as it relates to business goals

Raymond has been a data management professional for over 30 years. He has worked in the development, design, and management of databases for a national government organization. He has designed and developed databases for personnel, financial, corporate travel, sensor integration, and business workflow systems. He has also been active in process improvement initiatives. Raymond has been a reviewer of several data management books, including the Data Management Body of Knowledge (DMBOK).

Data Products have become a hot topic in recent years. Many architects use the term, often hoping it will make project problems disappear. However, only a few define it clearly. In this 60-minute session, we’ll cut through the hype and establish a clear, working definition of Data Products. We’ll outline the essential components and requirements of a Data Product, then examine the horizontal foundations of Governance, Architecture, and People, and how they work together to address real data challenges. Finally, we’ll walk through the Data Product lifecycle.

Mario Meir-Huber is an AI & Data Leader and Data Architect known for turning complex ideas into practical, scalable solutions. He is the co-author of Handbook Data Science & AI, teaches at WU Vienna, and speaks at global conferences on data strategy and data products. As a LinkedIn Learning Instructor and trusted advisor, he has guided enterprises through transformation initiatives across Data and AI. Previously a Vice President & Head of Data and earlier with Microsoft, he combines executive experience with hands-on technical depth. He is currently publishing a new book on Data Products with the renowned publisher Technics Publications.

There is a growing need to model using business terms familiar to users, i.e., semantic modeling. However, many semantic layer implementations are highly dependent on vendor tools such as BI or DWH platforms, and the modeling rules and steps are left to the discretion of individual modelers. TM is a model-building technique based on the relational model, developed primarily for business analysis, with many proven use cases in Japan. By applying TM, it becomes possible to model business data while engaging directly with business users, thereby gaining a wide range of insights that are valuable for business process reengineering (BPR), IT implementation, and analytics.

You will learn:

  • The rules and steps (syntax and semantics) for modeling business terms using the model-building technique TM
  • The following use cases:
    – Business analysis
    – Database re-engineering (reverse engineering and redesign of schemas using TM to detect unused tables and columns)
  • The application of TM to semantic modeling

Yasushi Kiyama joined Ajinomoto Co., Inc., a globally operating leader in amino acid manufacturing, as a new graduate in sales and later served as a product manager and in business management. Moving into IT, he worked on projects including master data management, global SCM, and overseas SAP implementations. He also led initiatives such as building a Data Hub for KPI and supply chain data, introducing data management programs, and developing a data catalog.
After retiring from Ajinomoto, he became President of the DAMA Japan Chapter, where he promotes DAMA-DMBOK and brings leading international publications to Japan. He led the Japanese translations of DMBOK2R, Executing Data Quality Projects (Danette McGilvray), and Data Strategy for Data Governance (Marilu Lopez).


Yosuke Suzuki is a manager and data modeler who leads the team that provides data management services at Fujitsu Limited. He has over 10 years of experience as a data modeler and has worked on projects in a variety of industries, including manufacturing, finance, distribution, and the public sector.
He is also the co-author of “DX Ready Core System Renovation Techniques (Japanese only),” which provides practical knowledge for renovating core systems incorporating data management activities. He specializes in visualizing and organizing complex business specifications from a data perspective through data modeling that is faithful to the client’s business terminology.

Data Products promise a way out of the persistent data challenges that have frustrated teams for years. But are they substance or just the latest hype? In this session, we cut through the noise and focus on what works. You’ll learn the essential ingredients of successful Data Products and how to manage their lifecycle from data retrieval all the way to measurable value generation. We’ll walk through the critical steps, highlight common pitfalls, and share practical guardrails you can apply immediately. The goal: help you move beyond prototypes and platforms to delivering repeatable, business-relevant outcomes.

Mario Meir-Huber is an AI & Data Leader and Data Architect known for turning complex ideas into practical, scalable solutions. He is the co-author of Handbook Data Science & AI, teaches at WU Vienna, and speaks at global conferences on data strategy and data products. As a LinkedIn Learning Instructor and trusted advisor, he has guided enterprises through transformation initiatives across Data and AI. Previously a Vice President & Head of Data and earlier with Microsoft, he combines executive experience with hands-on technical depth. He is currently publishing a new book on Data Products with the renowned publisher Technics Publications.

Aim, wind, and gravity influence an arrow’s trajectory, much the same way as deadlines, skills, and biases influence a data model’s trajectory, strongly impacting whether a model will reach its target of appropriately representing a business solution. The archer’s score can be quickly calculated, and we can easily see the success or failure of her work. This is where the analogy ends, however, because often we do not measure the strengths and weaknesses of our data models, leaving much up to interpretation, perception, and the test of time.

After years of reviewing hundreds of data models, I have formalized a set of data model quality criteria into what I call the Data Model Scorecard. The Scorecard contains all of the criteria for highlighting strengths and identifying areas for improvement in our designs. This session will provide an overview of the Data Model Scorecard.
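
As a rough illustration of what an objective measure makes possible, the toy tally below computes a weighted score; note that the category names and weights are placeholders we invented, not the actual Data Model Scorecard® categories.

```python
# Hypothetical illustration of an objective quality tally. The real Data
# Model Scorecard(R) has ten weighted categories; these names and weights
# are placeholders, not the actual criteria.
weights = {"Correctness": 15, "Completeness": 15, "Definitions": 10, "Readability": 10}
scores  = {"Correctness": 12, "Completeness": 14, "Definitions": 7,  "Readability": 9}

earned, possible = sum(scores.values()), sum(weights.values())
print(f"Model scores {earned}/{possible} ({100 * earned / possible:.0f}%)")
for category in weights:
    print(f"  {category}: {scores[category]}/{weights[category]}")
```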

You will learn to:

  • Appreciate the need for an objective measure of data model quality.
  • Know, at a high level, all ten categories.
  • See an example of a Scorecard template.

Steve Hoberman’s first word was “data”. He has been a data modeler for over 30 years, and thousands of business and data professionals have completed his Data Modeling Master Class. Steve is the author of 11 books on data modeling, including The Align > Refine > Design Series and Data Modeling Made Simple. Steve is also the author of Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, creator of the Data Modeling Institute’s Data Modeling Certification exam, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, lecturer at Columbia University, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award.

Intermediate/Advanced Modeling and Case Studies

The Align > Refine > Design approach covers conceptual, logical, and physical data modeling (schema design and patterns), combining proven data modeling practices with database-specific features to produce better applications. Learn how to apply this approach when creating a MongoDB schema. Align is about agreeing on the common business vocabulary so everyone is aligned on terminology and general initiative scope. Refine is about capturing the business requirements. That is, refining our knowledge of the initiative to focus on what is essential. Design is about the technical requirements. That is, designing to accommodate MongoDB’s powerful features and functions.

You will learn how to design effective and robust data models for MongoDB.
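
As a hint of what the Design step can produce, here is a minimal pymongo sketch (our illustration, not the session’s material) that realizes a refined model as a collection with an embedded-document pattern, guarded by a $jsonSchema validator; the collection and field names are assumptions.

```python
# Minimal sketch: a Design-stage MongoDB collection using an embedded-document
# pattern, enforced with a $jsonSchema validator (names are assumptions).
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]

db.create_collection("orders", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["customer", "items"],
        "properties": {
            "customer": {"bsonType": "string"},
            # A one-to-few relationship embedded inside the parent document,
            # a common MongoDB schema design pattern.
            "items": {
                "bsonType": "array",
                "items": {
                    "bsonType": "object",
                    "required": ["product", "qty"],
                    "properties": {"product": {"bsonType": "string"},
                                   "qty": {"bsonType": "int"}},
                },
            },
        },
    }
})

db.orders.insert_one({"customer": "Ada", "items": [{"product": "cocoa", "qty": 2}]})
```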

Daniel Coupal is a Staff Engineer at MongoDB. He built the Data Modeling class for MongoDB University. He also defined a methodology to develop for MongoDB and created a series of Schema Design Patterns to optimize Data Modeling for MongoDB and other NoSQL databases.

Organizations want to adopt smarter ways to manage, collaborate, and scale their data models. This session will explore the transformative benefits of model-driven metadata collaboration, showcasing how leveraging a shared understanding of metadata accelerates project delivery, improves cross-functional communication, and enhances data quality. Attendees will gain insights into the tools, best practices, and strategies that enable seamless collaboration across teams, from data architects to business analysts. Using the power of metadata models, teams can reduce errors, ensure alignment, and drive innovation. Whether you’re starting out with data modeling or seeking to optimize your existing processes, this session will provide actionable takeaways to enhance the impact and value of your data models across the enterprise.

Pascal Desmarets is the founder and CEO of Hackolade, a data modeling tool for NoSQL databases, storage formats, REST APIs, and JSON in RDBMS. Hackolade pioneered Polyglot Data Modeling, which is data modeling for polyglot data persistence and data exchanges. With Hackolade’s Metadata-as-Code strategy, data models are co-located with application code in Git repositories as they evolve and are published to business-facing data catalogs to ensure a shared understanding of the meaning and context of your data. Pascal is also an advocate of Domain-Driven Data Modeling.

As organizations scale their cloud data platforms, balancing agility with structured data governance becomes increasingly critical. This case study explores a practical implementation of logical and physical data modeling within the Databricks Lakehouse environment, using generative AI tools such as GitHub Copilot to accelerate and enhance the process.

I outline an iterative approach to model design that begins with logical models grounded in canonical enterprise concepts and progresses to physical implementations optimized for Delta Lake. The session highlights how AI-assisted tooling was used to profile diverse source datasets, identify artefact lineage and redundancies, generate query POCs for performance tuning, and even scaffold UI screens to elicit stakeholder requirements.

The use of Copilot extended beyond simple code generation: it enabled rapid hypothesis testing, automated pattern recognition in legacy systems, and supported documentation and versioning best practices in a collaborative DevOps environment. You will walk away with practical insights, architectural patterns, and governance strategies for accelerating enterprise data modeling in cloud-native environments without compromising on semantic integrity or reusability.
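
By way of illustration only (these are not the case study’s actual models), turning a canonical logical entity into a physical Delta Lake table might look like the following PySpark sketch; the table name, columns, and partitioning choice are our assumptions.

```python
# Minimal sketch: a canonical "Order" entity realized as a physical Delta
# table, partitioned for the expected query pattern (names are assumptions).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("""
    CREATE TABLE IF NOT EXISTS orders (
        order_id    BIGINT,
        customer_id BIGINT,
        status      STRING,
        order_date  DATE
    )
    USING DELTA
    PARTITIONED BY (order_date)
    COMMENT 'Physical implementation of the canonical Order entity'
""")
```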

Rafid is a data modeler who entered the field at the young age of 22, holding an undergraduate degree in Biology and Mathematics from the University of Ottawa. He was inducted into the DMC Hall of Fame by the Data Modeling Institute in July 2020, making him the first Canadian and 10th person worldwide to receive this honor. Rafid possesses extensive experience in creating standardized financial data models and utilizing various modeling techniques to enhance data delivery mechanisms. He is well-versed in data analytics, having conducted in-depth analyses of Capital Markets, Retail Banking, and Insurance data using both relational and NoSQL models. As a speaker, Rafid shared his expertise at the 2021 Data Modeling Zone Europe conference, focusing on the reverse engineering of physical NoSQL data models into logical ones. Rafid and his team recently placed second in an annual AI-Hackathon, focusing on a credit card fraud detection problem. Alongside his professional pursuits, Rafid loves recording music and creating digital art, showcasing his creative mind and passion for innovation in data modeling.

Join this technical deep-dive to master Amazon DynamoDB data modeling fundamentals and advanced strategies. Learn proven patterns for designing high-performance, scalable NoSQL applications that handle enterprise workloads with consistent sub-millisecond latency. This session covers critical decision frameworks for single-table vs. multi-table architectures, strategic indexing approaches, and real-world trade-offs that impact both performance and cost.

Ideal for data architects, backend developers, and database professionals building modern applications that demand predictable performance at scale. Walk away with actionable patterns you can immediately apply to optimize your DynamoDB implementations.
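
As a minimal taste of the single-table pattern the session examines, here is a boto3 sketch; the table name, key schema, and item shapes are our assumptions, and a real design would be driven by your access patterns.

```python
# Single-table design sketch with boto3: one table, generic PK/SK, entity
# types distinguished by key prefixes (all names are assumptions).
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("app-table")

# A customer profile and one of its orders share a partition key, so one
# query fetches the customer together with all related items.
table.put_item(Item={"PK": "CUST#42", "SK": "PROFILE", "name": "Ada"})
table.put_item(Item={"PK": "CUST#42", "SK": "ORDER#2026-03-01", "total": 99})

resp = table.query(KeyConditionExpression=Key("PK").eq("CUST#42"))
for item in resp["Items"]:
    print(item["SK"])
```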

The session will introduce the new “Unified Star Schema 2.0”. This is not only an evolution of the “Unified Star Schema” published by Technics Publications in 2020, but also the foundation for a groundbreaking Semantic Layer. The primary concept of USS 2.0 is the dynamic creation of a Bridge Table tailored to each query generated by a user, either through drag-and-drop or via a textual request to an LLM.
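
For orientation, the classic USS bridge can be pictured as a union of every table’s keys plus a Stage label; the toy sketch below assembles such a bridge from hypothetical table metadata, hinting at what dynamic creation per query means. It is our simplification, not the USS 2.0 algorithm.

```python
# Toy illustration: a USS-style bridge as a UNION ALL of every table's keys
# plus a Stage label, assembled from hypothetical table/key metadata. USS 2.0
# would generate such a bridge dynamically, tailored to each query.
tables = {
    "sales":     ["sale_id", "client_id", "product_id"],
    "shipments": ["shipment_id", "client_id", "product_id"],
}
all_keys = sorted({key for keys in tables.values() for key in keys})

selects = []
for table, keys in tables.items():
    cols = ", ".join(k if k in keys else f"NULL AS {k}" for k in all_keys)
    selects.append(f"SELECT '{table}' AS stage, {cols} FROM {table}")

print("\nUNION ALL\n".join(selects))
```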

You will learn:

  • The core concepts of the traditional Unified Star Schema (USS).
  • The advantages and disadvantages of the USS in comparison to Kimball’s Dimensional Modeling.
  • A data modeling technique that allows both solutions to co-exist.
  • The innovations introduced in the USS 2.0.
  • Why LLMs require a Semantic Layer.
  • Why the USS 2.0 and its Semantic Layer are perfect for text-to-SQL with LLMs.


Francesco Puppini is an Italian freelance consultant in business intelligence and data warehousing. He is the inventor of the Unified Star Schema, which is also the title of the book he wrote with Bill Inmon. He has always focused on the “last mile challenge”: how to deliver information to business users. He is currently focusing on algorithms of graph theory applied to data modeling. He is also working on a framework of communication between LLMs and semantic layers. His ultimate goal is to lay the foundations for a full experience of self-service access to information.

Conventional relational models can be less than ideal for processing streaming and IoT data. This session will describe potential alternatives for real-time modelling using event-driven models, temporal structures, and schema evolution. We’ll also examine how emerging stream-processing platforms such as Kafka are changing modelling challenges, and we will share some practical ways to think about modelling that you can take away and apply yourself, including governance, data contracts, and semantic modelling of events to produce real-time analytics that are agile, scalable, and trustworthy. (A small sketch of a contract-style event follows the list below.)
You will learn:

  • To model real-time and streaming data with event-based design patterns, temporal structures, and schema evolution.
  • How platforms such as Apache Kafka and cloud-native services shift modelling from rigid ER diagrams to flexible, event-based schemas.
  • How to work with unbounded datasets and late-arriving data while maintaining validity, both operationally and analytically.
  • How to handle governance, lineage, and scale for real-time data modelling using data contracts, schema registries, and semantic event-mapping tools.
  • How to develop practical, real-time models that support IoT, inform real-time decisions, and provide AI-derived insights, all without being expensive to re-engineer.
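
As promised above, here is a minimal contract-style event sketch using the kafka-python client; the topic name, field set, and broker address are assumptions, and a production setup would typically add a schema registry.

```python
# Minimal sketch: a versioned, contract-style IoT event published to Kafka.
# Topic name, fields, and broker address are assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

@dataclass
class SensorReading:
    """Event contract v1; adding optional fields later is a compatible evolution."""
    schema_version: int
    sensor_id: str
    reading: float
    event_time: str  # when the reading happened (event time, not arrival time)

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
event = SensorReading(1, "sensor-17", 21.4, datetime.now(timezone.utc).isoformat())
producer.send("iot.sensor-readings.v1", asdict(event))
producer.flush()
```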

Thembeka Snethemba Jiyane is a South African Data Modeler and passionate Youth Empowerment Advocate. She holds a BSc in Computer Science and Mathematics and an Honours BSc in Information Technology, with extensive expertise in data warehousing, integration, and data modelling strategies.

Recognized as a finalist and award recipient for the Young Data Professional of the Year 2025 at DataFest South Africa, Thembeka continues to make her mark in the data community. She has presented at international platforms such as Data Modelling Zone and DataFest, where she shares thought-provoking insights on the future of data modelling and the evolving role of data professionals.

Beyond her professional achievements, Thembeka is the founder and director of Blessed Child Empowerment Projects (BCEP), a non-profit organization dedicated to mentorship, financial literacy, and STEM education for young people. Her work bridges the worlds of technology, education, and empowerment, inspiring the next generation to lead with purpose and innovation.

Sewela Sathekge-Ramakaba is a seasoned and skilled data management professional, with over 10 years of hands-on experience in the telecommunications, insurance, and banking sectors. Sewela holds a BSc in Computer Science and is currently pursuing a BSc Honours in Information Technology.

Sewela is a DAMA Certified Data Management Professional Associate (CDMP) with a Master’s badge in Data Modelling, which demonstrates her advanced understanding of structuring, governing and using enterprise data assets. She has enabled meaningful data solutions throughout her career which has assisted organisations in making better decisions, ensuring compliance and driving business value through data management.

Combining strong technical and strategic data management expertise, Sewela is committed to driving best practice within the industry to enhance data enterprise performance.

Data Strategy/CDMP

The DAMA-DMBOK® is the global reference for Data Management—and it’s evolving. In this interactive session, we’ll share the progress and direction of DAMA-DMBOK® 3.0, highlighting key updates, new perspectives, and how the framework is adapting to emerging trends like AI, cloud, and data ethics. Participants will gain behind-the-scenes insight into the editorial process, including lessons learned from global stakeholder engagement. The session will also serve as a mini-workshop, inviting attendees to contribute ideas, identify challenges, and help shape the next edition. Whether you’re a seasoned practitioner or new to the DMBOK®, you’ll leave with a clear view of where the standard is headed—and how you can be part of it.

You will learn:

  • The latest developments and themes in DAMA-DMBOK® 3.0.
  • How global feedback and industry trends are shaping the update.
  • Opportunities to contribute to the framework’s evolution.
  • Insights into the editorial and governance process behind the standard.

Mathias is a trailblazer in the world of data governance. With over a decade of experience, Mathias has been instrumental in transforming organizations by implementing robust data governance frameworks that actually work and are being adopted. As the president and principal of Data Vantage Consulting, Mathias works closely with top executives across multiple industries, accelerating their journey toward efficient data governance. At the moment, Mathias serves as the Project Manager and Technical Writer for the DAMA-DMBOK® 3.0 initiative.

Boasting an impressive portfolio of successful projects, Mathias has proven time and again that he can turn even the most chaotic data landscapes into organized and efficient systems. More than just implementing data governance, Mathias trains teams, fostering a culture of data literacy and ownership that lasts long after his work is done.

He’s a thought leader who’s consistently pushing boundaries to explore new ways of leveraging data for business success. His innovative approach to data governance is rooted in his belief that data, when governed effectively, can be a powerful tool! 

Data models define how information connects, but true business value comes from how effectively an organization can act on those connections. This session explores a practical approach to operationalizing data maturity—bridging the gap between sound data modeling practices and measurable transformation outcomes.

We’ll examine how teams can assess their current capabilities, identify maturity levers, and build improvement roadmaps that evolve with business needs and align with organizational strategic goals. The session will also explain how the process benefits from AI to uncover hidden patterns, enhance decision-making, and accelerate maturity progress. Drawing on insights from organizations using structured, platform-based assessment approaches, attendees will gain actionable techniques to translate modeling insight into sustained business performance, supported by continuous learning and improvement.

You will learn:

  • How to evaluate and benchmark data maturity across strategy, governance, and architecture.
  • Methods for linking modeling and design work to organizational readiness.
  • How to ensure data initiatives align with broader organizational strategic goals.
  • How AI can enhance assessment and decision processes in data strategy development.
  • Techniques for prioritizing improvement areas and building sustainable maturity roadmaps.

 

Ahmed Abbas is the Founder & CEO of DUNNIXER, a SaaS company digitizing the consulting industry through AI-powered maturity assessments. With over 25 years of experience in enterprise IT, software architecture, and data-driven transformation, Ahmed has led large-scale digital and data architecture initiatives at IBM and EY-Parthenon across the Middle East, Europe, and North America. A Distinguished Certified Architect accredited by The Open Group, and a patented inventor, he brings a unique perspective on integrating AI, data modeling, and enterprise architecture to drive measurable business outcomes and accelerate digital transformation.

Does anyone remember Orville and Wilbur Wright? How about Albert Einstein or Isaac Newton? How about Jonas Salk and Michael DeBakey? Other professions pay homage to their pioneers. The IT profession buries its pioneers in an unmarked grave and erases their names from all literature before the body is even cold.

This presentation takes a look at answering the question – the computer, technology – how did all of this happen, and who made it happen?

Some interesting questions – who was Gene Amdahl and what did he do? How about Ed Yourdon? Grace Hopper? John Zachman? Alan Turing? Charles Babbage? Why did Navajo women play an important role in early computer technology? Where did venture capitalism start? Where did the Hollerith punch card come from? What was the calculation machine and where is it today?

Bill Inmon, the “father of the data warehouse,” has written 60 books published in nine languages. ComputerWorld named Bill one of the ten most influential people in the history of the computer profession. Bill’s latest adventure is the building of technology known as textual disambiguation.

There are two schools of thought when it comes to application development: “We use relational because it allows proper data modeling and is use-case flexible” versus “We use JSON documents because it’s simple and flexible.”
Oracle’s latest database, 23ai, offers a new paradigm: JSON Relational Duality, where data is both JSON and tables and can be accessed with document APIs and SQL depending on the use case. This session explains the concepts behind a new technology that combines the best of JSON and relational. (A small sketch of the two access paths follows the list below.)
You will learn:

  • The differences, strengths, and weaknesses of JSON versus relational.
  • How a combined model is possible and how it simplifies application design and evolution.
  • How SQL and NoSQL are no longer technology choices but just different means to work with the same data in the same system.
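
To make the duality concrete, the sketch below shows the two access paths against a hypothetical duality view ORDER_DV over an ORDERS table (names and connection details are our assumptions): the same row is read as a JSON document and updated with plain SQL.

```python
# Minimal sketch: the same row read as a JSON document through the duality
# view and updated relationally with SQL. Assumes a duality view ORDER_DV
# over an ORDERS table; names and connection details are ours.
import oracledb

conn = oracledb.connect(user="demo", password="demo", dsn="localhost/freepdb1")
cur = conn.cursor()

# Document path: fetch order 1 as a single JSON document.
cur.execute("""SELECT data FROM order_dv
               WHERE json_value(data, '$._id' RETURNING NUMBER) = 1""")
print(cur.fetchone()[0])

# Relational path: a plain SQL update against the underlying table...
cur.execute("UPDATE orders SET customer_name = 'Ada' WHERE order_id = 1")
conn.commit()

# ...is immediately visible through the document interface.
cur.execute("""SELECT data FROM order_dv
               WHERE json_value(data, '$._id' RETURNING NUMBER) = 1""")
print(cur.fetchone()[0])
```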

Beda Hammerschmidt studied computer science and later earned a PhD in indexing XML data. He joined Oracle as a software developer in 2006. He initiated the support for JSON in Oracle and is co-author of the SQL/JSON standard. Beda is currently managing groups supporting semi-structured data in Oracle (JSON, XML, Full Text, etc.).

Why? Whenever you need to understand or define data, thinking semantically will help. Whenever you need to integrate data, thinking semantically will assist and simplify your process.

How? Data modelers should categorize entities, relationships, and attributes at a higher level of abstraction. Ontologists should ensure that classes and object properties are subclasses of higher-level classes and object properties, and that data properties utilize common ranges.
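
A minimal rdflib sketch of this idea (class and property names are our assumptions): type things at a specific level while categorizing them under higher-level classes and properties, so queries can work at whichever level of abstraction suits the task.

```python
# Minimal rdflib sketch (class and property names are assumptions): categorize
# specifics under higher-level abstractions, then query at either level.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Classes: Customer and Supplier are both kinds of Party.
g.add((EX.Customer, RDFS.subClassOf, EX.Party))
g.add((EX.Supplier, RDFS.subClassOf, EX.Party))

# Object properties: "places" specializes a generic "participatesIn".
g.add((EX.places, RDFS.subPropertyOf, EX.participatesIn))

# An instance typed at the most specific level...
g.add((EX.acme, RDF.type, EX.Customer))

# ...is still reachable when reasoning from the higher abstraction:
# this walks Party and everything categorized beneath it.
for cls in g.transitive_subjects(RDFS.subClassOf, EX.Party):
    print(cls)
```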

You will learn:

  • How to think semantically about what data modelers call entities, relationships, and attributes and what ontologists call classes and object properties.
  • How taxonomies help in understanding data and relationships.
  • Why thinking semantically will assist in your data integration projects.
  • How to understand the world better by recognizing commonalities.
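
As a small illustration of categorizing at a higher level of abstraction, here is a minimal sketch using the rdflib Python library; the tiny Agent/Person/Organization hierarchy is a hypothetical example, not material from this session.

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Both Person and Organization are declared subclasses of a shared,
    # higher-level class: Agent.
    g.add((EX.Agent, RDF.type, RDFS.Class))
    g.add((EX.Person, RDFS.subClassOf, EX.Agent))
    g.add((EX.Organization, RDFS.subClassOf, EX.Agent))

    # An object property declared on the superclass applies to both kinds
    # of entity, which simplifies integrating data about either.
    g.add((EX.participatesIn, RDFS.domain, EX.Agent))

    # A query against the shared superclass treats both uniformly.
    for subclass in g.subjects(RDFS.subClassOf, EX.Agent):
        print(subclass)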

Norman Daoust founded his consulting company Daoust Associates in 2001. He became addicted to modeling as a result of his numerous healthcare data integration projects. He was a long-time contributor to the healthcare industry standard data model Health Level Seven Reference Information Model (RIM). He sees patterns in both data model entities and their relationships. Norman enjoys training and making complex ideas easy to understand.

Data-driven AI

By 2026, autonomous AI agents will be transforming how enterprises operate—moving beyond automation into systems that can reason, plan, act, and collaborate at scale. This session, based on the newly published book AI Agents at Work: The Agentic Revolution in Industry, unpacks the architectures, orchestration frameworks, and governance models that make agentic AI practical and enterprise-ready.

You will learn:

  • When and where agents outperform traditional AI — frameworks to identify use cases where reasoning, planning, and collaboration unlock greater value.
  • Enterprise-ready architectures for multi-agent systems — design patterns, orchestration frameworks (LangGraph, CrewAI, AutoGen), and integration strategies.
  • Governance and oversight models — techniques to ensure agent performance, compliance, and accountability without stifling autonomy.
  • Deployment blueprints and ROI frameworks — how to operationalize agent systems at scale while tracking efficiency, accuracy, and cost optimization.
  • Industry case studies and lessons learned — real-world examples across finance, healthcare, cybersecurity, and supply chain where agentic AI is already delivering measurable impact.

Kinshuk Dutta is a visionary technology leader with over 18 years of experience in Data Management, Business Integration, and Autonomous Endpoint Management. Currently Director of Product Enablement at Tanium Inc., Kinshuk has a strong record of leading global teams in Pre Sales, Customer Success, Product R&D, and Product Enablement. His career has been rooted in Data and AI, specializing in delivering sophisticated solutions to solve complex enterprise problems, accelerating sales cycles, and driving customer adoption. 

Data modeling is an essential data management skill set that has an important (but frequently unrecognized) role in artificial intelligence. AI models depend on data. Discriminative models classify existing data and use it to infer predictions and conclusions. Generative models create new data that is collected, stored, managed, and used as feedback. 

Data models have roles in every phase of the AI lifecycle. Data modeling provides techniques to organize, understand, prepare, and manage data for AI. Data models provide business context, describe data content and organization, support feature engineering and data preparation, and reinforce model interpretability. Attend this session to learn:

  • Six phases of the AI Lifecycle and the activities of each phase
  • Where and how Data Modeling fits into the AI Lifecycle
  • The roles of data models in AI Governance and Explainable AI
  • Future modeling considerations for agentic AI

Dave Wells is a data management consultant and educator with experience across a broad spectrum of data management processes and practices. As a consultant he provides advice, direction, and guidance for data architecture, data quality, data governance, data integration, and data interoperability. As an educator, he is the Director of Education and an instructor at eLearningCurve and instructor of a variety of courses at Dataversity. Several decades of information systems, data management, and business management experience give Dave a well-balanced perspective about the synergies of business, information, data, and technology. Knowledge sharing and skills building are Dave’s passions, carried out through consulting, speaking, teaching, and writing. 

How can we use niche technologies and the power of LLMs and Gen AI to automate data modeling? Can we feed in business requirements, with correct prompt engineering, to enable architects to use LLMs to create domain, logical, and physical models? Can it help recommend entities, attributes, and their definitions? How can Gen AI be used to integrate company-specific guidelines into these models? Can it flag potential attributes that may contain PII data? Can this further extend into testing the performance of query patterns on these tables and suggesting improvements before handing them over to the development team? Please join us to see how we can unlock the potential of LLMs and Gen AI as data architects.
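
As one illustration of the idea (a sketch under assumptions, not the presenter's actual approach), the snippet below prompts an LLM to draft entities, attributes, and PII flags from a business requirement, assuming the openai Python client; the model name, prompt wording, and requirement text are placeholders.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    requirements = """
    Customers place orders; each order has line items for products.
    We track each customer's email address and shipping address.
    """

    prompt = (
        "From these business requirements, propose a logical data model: "
        "entities, attributes with definitions, relationships with "
        "cardinality, and a flag on any attribute likely to contain PII.\n"
        + requirements
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)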

Eve Danoff is an Engineering Director in Next Generation Data Management at American Express. With over 30 years of experience in data architecture and database design, she collaborates closely with critical platforms across Technology organizations, inspiring her team to apply a consistent and scalable approach to building and managing innovative domain and database design solutions. 

As organizations race to integrate AI, governance often lags behind, creating risk and uncertainty. This keynote explores how to build a dynamic AI governance framework that evolves with technology, aligns with other methods of governance, manages ethical and regulatory expectations, and drives measurable results. Attendees will gain practical insights into moving from fragmented oversight to a coherent strategy that fosters innovation, accountability, and trust.  

Laura Madsen is a global data strategist, keynote speaker, and author. She advises data leaders in healthcare, government, manufacturing, and tech. Laura has spoken at hundreds of conferences and events, inspiring organizations and individuals alike with her iconoclastic disrupter mentality.

As AI adoption accelerates, organizations face mounting pressure to ensure these systems are ethical, transparent, and compliant. This session demystifies AI Governance and Responsible AI, showing how Data Governance is the essential backbone for trustworthy AI.

We’ll explore practical frameworks that integrate policy, ethics, and data management to reduce risk and build trust. You’ll discover how to operationalize governance across AI lifecycle stages—from data sourcing to model deployment. By connecting these governance disciplines, participants will leave with actionable strategies for implementing ethical, effective AI.

You will learn:

  • The core principles of AI Governance and Responsible AI.
  • How Data Governance underpins ethical and compliant AI practices.
  • Strategies for integrating AI and Data Governance frameworks into business operations.
  • Practical tools for transparency, accountability, and fairness in AI systems.

Mathias is a trailblazer in the world of data governance. With over a decade of experience, Mathias has been instrumental in transforming organizations by implementing robust data governance frameworks that actually work and are being adopted. As the president and principal of Data Vantage Consulting, Mathias works closely with top executives across multiple industries, accelerating their journey toward efficient data governance. At the moment, Mathias serves as the Project Manager and Technical Writer for the DAMA-DMBOK® 3.0 initiative.

Boasting an impressive portfolio of successful projects, Mathias has proven time and again that he can turn even the most chaotic data landscapes into organized and efficient systems. More than just implementing data governance, Mathias trains teams, fostering a culture of data literacy and ownership that lasts long after his work is done.

He’s a thought leader who’s consistently pushing boundaries to explore new ways of leveraging data for business success. His innovative approach to data governance is rooted in his belief that data, when governed effectively, can be a powerful tool! 

Communication

In this workshop, participants will learn best practices for presenting in a variety of professional contexts. We will focus on organization, audience analysis, and visual presentation (the slide deck) to best communicate your data modeling pitch.

Christina Sabee (PhD) is a Professor and Department Chair of the Communication Studies department at San Francisco State University, and Board of Directors President for Community Boards of San Francisco.  She has published and taught in the Communication Studies field for over 25 years, with a focus on strategic interpersonal communication.  Christina uses a rigorous academic focus along with practical skills work in her training sessions that help participants engage more effectively immediately after their workshop. 

Would you like the power to speak before any audience in the world with limited preparation and maximum success? Surely you do, because today’s global marketplace demands an ability to convey your ideas in a manner that is clear, concise, correct, and compelling. Whether you are pitching a product, building a brand, negotiating a deal, launching a campaign, or coordinating a team, your ability to speak with conviction, to persuade with precision, may mark the difference between success and failure.

No doubt, you can remember amazing TED talks, inspiring political speeches, or clever advertising appeals when you thought, “I wish I knew the secret to moving an audience with such ease!” You might have dismissed your dreams of becoming a more effective presenter by thinking that such skill demands an innate talent you could never possess. Even so, most experts in the field of communication will tell you that excellence in presentations is not a matter of some unchanging “trait” but rather a matter of intentional “state”: a state of mind and behavior that can be learned, practiced, and improved. This workshop will help you attain that state that leads to success.

Secrets of Effective Delivery

  • Eye Contact: This section summarizes three eye contact mistakes to avoid – searchlight, tunnel vision, and bouncy ball – before introducing the principle of “one idea per person.”
  • Vocalics: This section summarizes the dimensions of volume, pitch, and emphasis before addressing the importance of variety in oral communication.
  • Gestures: This section introduces three aspects of effective gesturing: relaxed, not robotic; open, not closed; and invitational, not defensive.
  • Platform Movement: This section introduces three benefits of platform movement: demonstrate organization, increase interaction, and release nervous energy.

Participants engage in practical small-group feedback activities related to each of these four dimensions of effective delivery.

Andrew F. Wood (Ph.D, 1998, Ohio University) is a Professor and Chair of the Department of Communication Studies at San José State University in California. After completing his doctoral training at Ohio University, Dr. Wood began an academic and consulting career that has taken him around the world. He has delivered courses and presentations in Argentina, Austria, China, Finland, Germany, Mexico, and Slovenia. As a Fulbright Scholar, he also taught for a semester in Belarus. Dr. Wood’s scholarship into the rhetorics of modernity has brought him to some fascinating places, studying “Ghost Towns” in the United States, practices of “Dark Tourism” in Chernobyl, monumental architecture in North Korea, and the oddly thrilling pleasures of an industrial disaster site in Turkmenistan called the “Doorway to Hell.” His most recent book is entitled A Rhetoric of Ruins: Exploring Landscapes of Abandoned Modernity. With over a quarter-century of experience in coaching, consulting, and teaching, Dr. Wood has developed a rich assortment of practical strategies designed to help professionals and students enhance their intercultural and communication competence.

In this workshop, participants will learn how to respond to questions, comments, and criticism in real time.  We will focus on techniques that help you confidently present your perspective while simultaneously keeping your calm.  Additionally, we will work through the best ways to respond when you don’t know the answer or when you are getting more feedback than is welcome. 

Christina Sabee (PhD) is a Professor and Department Chair of the Communication Studies department at San Francisco State University, and Board of Directors President for Community Boards of San Francisco.  She has published and taught in the Communication Studies field for over 25 years, with a focus on strategic interpersonal communication.  Christina uses a rigorous academic focus along with practical skills work in her training sessions that help participants engage more effectively immediately after their workshop. 

In this workshop, participants will learn how to manage conflicts with co-workers in a productive way.  We will practice a few key strategies that can be applied in most conflicts that help de-escalate and move toward a solution that works for everyone. 

Christina Sabee (PhD) is a Professor and Department Chair of the Communication Studies department at San Francisco State University, and Board of Directors President for Community Boards of San Francisco.  She has published and taught in the Communication Studies field for over 25 years, with a focus on strategic interpersonal communication.  Christina uses a rigorous academic focus along with practical skills work in her training sessions that help participants engage more effectively immediately after their workshop. 

Have you ever felt like an outsider trying to “read the room” in a new team, company, or culture? This hands-on workshop will help you decode unspoken norms and rules around communication. You’ll learn to identify and interpret cognitive scripts—the mental blueprints that guide how people act, talk, and collaborate in different settings. Grounded in an accessible approach called the ethnography of communication, this session will give you practical tools to analyze how communication really works across cultural and organizational boundaries.

Through engaging hands-on activities and discussions, you’ll practice spotting patterns, understanding “script violations,” and adapting your own communication style to navigate new environments with insight and confidence. If you want to communicate more effectively in diverse teams and global contexts, this workshop is for you.

Dr. Tabitha Hart holds three degrees in communication: a B.A. from the University of California, San Diego; an M.A. from California State University, Sacramento, and a Ph.D. from the University of Washington. Her research focuses on culture and communication in organizational settings. Recent publications include an innovative workbook on how to use ethnographic methods to explore cultures firsthand (Exploring Cultural Communication from the Inside Out: An Ethnographic Toolkit) and an edited volume presenting the newest version of speech codes theory (Contending with Codes In a World Of Difference: Transforming a Theory of Human Communication). Dr. Hart is a Professor in the Department of Communication Studies at San Jose State University. 

In this talk I will share observations about the role of language in visualization and in other modes of visual expression. I will pose questions such as: how do we decide what to express via language vs via visuals? How do we choose what kind of text to use when creating visualizations, and does that choice matter? Does anyone prefer text over visuals, under what circumstances, and why? Why is visualization ineffective for expressing text content? And what are the ramifications for this combination given the success of Generative AI at creating and analyzing multimodal data?

Dr. Marti Hearst is a Professor in the UC Berkeley School of Information and the Computer Science Division. She was Interim Dean and Head of School for the I School from 2022-2024. Her research encompasses user interfaces with a focus on search, information visualization with a focus on text, computational linguistics, and educational technology. She is the author of Search User Interfaces, the first academic book on that topic. She co-founded the ACM Learning@Scale conference, is a former President of the Association for Computational Linguistics, a member of the CHI Academy and the SIGIR Academy, an ACM Fellow, an ACL Fellow, and has received four Excellence in Teaching Awards from the students of UC Berkeley. She received her PhD, MS, and BA degrees in Computer Science from UC Berkeley and was a member of the research staff at Xerox PARC.

Keynotes

AI is used everywhere, yet organizations still struggle to generate actual business value with it. The reason is that the focus is too much on technological capabilities and not enough on the bigger picture around them: strategic, cultural, and process-related aspects. In this talk, Tiankai introduces the FOREST framework to provide a structured lens on the key factors for successfully turning AI hype into scalable value.

Tiankai Feng is a Data and AI leader by day, a musician by night, and an optimist at heart. His experiences span marketing analytics, business performance management, data product ownership, capability leadership, data governance, data strategy, and AI transformation. Working at TD Reply, adidas, and Thoughtworks allowed him to experience data and AI challenges from both consultant and client perspectives, helping him identify patterns in what works and what spectacularly doesn’t. Author of Humanizing Data Strategy, TEDx speaker, and frequent keynote presenter, Tiankai strongly believes in keeping humans at the center of our AI future. He often uses humor, music, and perfectly timed memes to make AI less intimidating and more approachable—because if we’re going to work with machines that sound human, we might as well have some fun with it.

Early-morning Sessions

Laura Madsen is a global data strategist, keynote speaker, and author. She advises data leaders in healthcare, government, manufacturing, and tech. Laura has spoken at hundreds of conferences and events, inspiring organizations and individuals alike with her iconoclastic disrupter mentality. Laura is a co-founder and partner at Moxy Analytics, a Minneapolis-based consulting firm, where she converges two of her biggest passions: helping companies define and execute successful data strategies and radically challenging the status quo.

Post-conference Workshops

Foundational Modeling Skills

The Greedy Gus Gold Company specializes in prospecting and mining placer deposits of gold, silver, and other valuable minerals on different available lands. Unearthing the complex data related to land ownership, mineral assays, extraction production, partner compensation, and regulatory requirements presents a data modeling challenge—one that will have you exploring solutions like a seasoned prospector.

In this hands-on workshop, you’ll strike gold by teaming up with fellow data modelers to build a logical data model for Greedy Gus’s next-generation business management system. Your database solution will support tracking of mineral locations, lab assay reports, extraction and refining processes, and partner agreements.

Session facilitators will play the role of Greedy Gus co-managers, providing business insights and helping you dig into your probing questions to pan out data nuggets. Given our limited time, we’ll adopt a “cooking show” approach—some of the data requirements (ingredients) will be pre-prepared and measured, so your group can focus on modeling the complexities unique to prospecting. At the end, we’ll host a “tasting” to showcase the treasured model your group has mined.

Data modeling tool vendors will help lead each group’s assigned portion of the logical data modeling. You are also welcome to bring your own favorite excavating data modeling tool. You’ll be encouraged to use Generative AI as a trusty prospecting pan, assisting in data modeling. This workshop promises a trove of valuable, real-world experience—an opportunity to deepen your expertise, collaborate with peers, and hit pay dirt with industry tools.

Kiranmai Mandali graduated from The University of Texas at Dallas in Information Technology and Systems and holds certifications in ITIL and Scrum Master. She has experience in regulatory consulting and business analysis within the insurance and consulting world. Currently a Data Designer/Administrator at State Farm, Kiranmai designs enterprise data models, drives initiatives to improve data integrity, and mentors data designers on best practices. She’s passionate about bridging business needs with scalable, compliant, and future-ready data solutions. Aside from work, she loves traveling and exchanging cultural experiences.

Steve Sewell graduated from Illinois State University in Business Data Processing, where he gained expertise in various programming languages, requirements gathering, and normalized database design. With a career spanning over three decades in the insurance industry, Steve has excelled in many roles, including his most recent as a Senior Data Designer at State Farm. His current work involves providing strategic guidance for enterprise-wide initiatives involving large-scale Postgres and AWS implementations, while adhering to best practices in database design. Steve is actively involved in teaching new Data Designers data modeling best practices.

Intermediate/Advanced Modeling and Case Studies

The Unified Star Schema (USS) is a data modeling technique that generalizes the traditional Dimensional Modeling of Kimball. With the USS, it is possible to create a “multi-fact” environment that can be used as a self-service platform for business users. This is normally impossible with Dimensional Modeling, where every multi-fact query must be created by a data expert. Business users don’t know SQL, and they don’t know data modeling either. This is why they need the USS!

We will initially build a USS based on an Excel data source, small, but complex: six Fact Tables. The process will be manual, and the entire Bridge table will be visible in front of our eyes. Later, we will build a second USS, based on a large and complex data source. We will analyze the metadata, and we will see the code in action, populating a USS in front of our eyes. Finally, we will experience the self-service environment for business users.

You will learn:

  • How to prepare the metadata that acts as a foundation for the entire process
  • How to recognize the “soft numbers”
  • How to build the USS Module tables
  • How to build a Puppini Bridge
  • How to be “backwards compatible” with Dimensional Modeling
  • How to consume a multi-fact self-service environment for business users
  • Why fact-to-fact queries in the USS never produce a Cartesian product
  • Why the USS can also be built from a 3NF data source
  • BONUS: You will also learn what problems the USS has not solved yet
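
For intuition only, here is a loose conceptual sketch of a Bridge table using pandas; it is not Puppini's exact construction, and the sales/refunds tables are hypothetical. The point it illustrates is that the Bridge is a union of keys, tagged by stage, rather than a fact-to-fact join.

    import pandas as pd

    # Two hypothetical fact tables that share the product_id key.
    sales = pd.DataFrame({"sale_id": [1, 2], "product_id": [10, 20], "amount": [100, 250]})
    refunds = pd.DataFrame({"refund_id": [7], "product_id": [20], "amount": [-50]})

    # The Bridge stacks (unions) the keys of every table, tagged with a Stage
    # column; keys that do not apply to a row are simply left empty, so rows
    # never multiply the way a fact-to-fact join can.
    bridge = pd.concat(
        [
            sales[["sale_id", "product_id"]].assign(Stage="Sales"),
            refunds[["refund_id", "product_id"]].assign(Stage="Refunds"),
        ],
        ignore_index=True,
    )

    # Each fact table is then joined to the Bridge once, on its own key,
    # so business users can combine facts without writing multi-fact SQL.
    print(bridge)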


Francesco Puppini is an Italian freelance consultant in business intelligence and data warehousing. He is the inventor of the Unified Star Schema, which is also the title of a book that he wrote with Bill Inmon. He has always focused on the “last mile challenge”: how to deliver information to business users. He is currently focusing on algorithms of graph theory applied to data modeling. He is also working on a framework of communication between LLMs and semantic layers. His ultimate goal is to lay the foundations for a full experience of self-service access to information.

Data Strategy/CDMP

Unlock the potential of your data management career with the Certified Data Management Professional (CDMP) program by DAMA International. As the global leader in Data Management, DAMA empowers professionals like you to acquire the skills, knowledge, and recognition necessary to thrive in today’s data-driven world. Whether you’re a seasoned data professional or an aspiring Data Management expert, the CDMP certification sets you apart, validating your expertise and opening doors to new career opportunities.

CDMP is recognized worldwide as the gold standard for Data Management professionals. Employers around the globe trust and seek out CDMP-certified individuals, making it an essential credential for career advancement.

All CDMP certification levels require passing the Data Management Fundamentals exam. This workshop is aimed at letting you know what to expect when taking the exam and how to define your best strategy for answering it. It is not intended to teach you Data Management, but to introduce you to CDMP and briefly review the most relevant topics to keep in mind. After our break for lunch, you will have the opportunity to take the exam in its PIYP (Pay If You Pass) modality!

Any CDMP exam can be taken on a PIYP (Pay If You Pass) basis, which is a great opportunity.

Those registered for this workshop will get an Event CODE to enroll in the CDMP exam at no upfront charge. The Event CODE will be emailed along with instructions to enroll in the exam. Once enrolled, you will have access to the Practice Exam, and it is strongly recommended that you take it as many times as possible before the exam.

Considerations:

  • You will receive instructions to enroll in the CDMP exam on a PIYP basis.
  • PIYP means that if you pass the exam (all exams are passed by answering 60% of questions correctly), you must pay for it (US$300.00) before leaving the room, so be ready with your credit card. Even if you were expecting a score of 70 or above and you get a 69, you still must pay for the exam.
  • You must bring your own personal device (a laptop, not a tablet or mobile phone) with the Chrome browser.
  • Work laptops are not recommended, as they might have firewalls that will not allow you to enter the exam platform.
  • If English is not your primary language, you must indicate so when receiving the workshop instructions by email; this will give you 20 additional minutes to complete the exam (regular time is 90 minutes).
  • All the specialty exams will be available.

If you are interested in taking this workshop, please complete this form to receive your Event CODE and to secure a spot to take the exam.

Data-driven AI

For decades, enterprise data architecture has relied on a familiar pattern: when systems need to share data, we copy it. Extract, transform, and load has fueled data warehouses, operational data stores, data lakes, and countless point-to-point flows. This copy-based model has enabled reporting, dashboards, BI, and analytics—but at a cost. With every new system, copy, and schema change, the complexity grows. Technical debt accumulates, integrations become brittle, and adapting to change takes more time, money, and risk than organizations can afford.

Interoperability offers a different path. Instead of replicating data across systems, interoperability enables systems to communicate meaningfully where the data already resides. It is about shared understanding, not just shared storage. By applying the same principles that allow diverse software to integrate—common protocols, semantic alignment, and well-defined contracts—data interoperability reduces redundancy, strengthens resilience, and simplifies architecture. The result is operational agility, analytical clarity, and a data ecosystem built to adapt rather than break under change.

You will learn:

  • Why interoperability is essential in modern data ecosystems
  • The operational data implications—reducing redundancy, complexity, and fragility
  • The analytical data implications—expanding scope, trust, and speed of insight
  • How interoperability reshapes data management architecture for resilience and agility
  • Practical considerations to apply interoperability principles in your organization
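
As one concrete (and hypothetical) form of such a well-defined contract, the sketch below validates a record in place with the Python jsonschema library instead of copying the data into another store; the customer schema and record are invented examples.

    from jsonschema import validate

    # A contract the producing and consuming systems agree on.
    customer_contract = {
        "type": "object",
        "required": ["customer_id", "email"],
        "properties": {
            "customer_id": {"type": "string"},
            "email": {"type": "string"},
        },
    }

    # The consumer validates the record where it resides; no ETL copy is made.
    record = {"customer_id": "C-001", "email": "ada@example.org"}
    validate(instance=record, schema=customer_contract)  # raises if the contract is violated
    print("record conforms to the shared contract")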

Dave Wells is a data management consultant and educator with experience across a broad spectrum of data management processes and practices. As a consultant he provides advice, direction, and guidance for data architecture, data quality, data governance, data integration, and data interoperability. As an educator, he is the Director of Education and an instructor at eLearningCurve and instructor of a variety of courses at Dataversity. Several decades of information systems, data management, and business management experience give Dave a well-balanced perspective about the synergies of business, information, data, and technology. Knowledge sharing and skills building are Dave’s passions, carried out through consulting, speaking, teaching, and writing. 

Communication

Data alone doesn’t drive change. Stories do. For data professionals, the ability to translate complex information into compelling narratives is essential for inspiring action, aligning teams, and engaging stakeholders. This interactive workshop introduces participants to the core elements of high-impact storytelling and offers practical frameworks for reframing content with clarity and emotional resonance. Through a mix of collaborative exercises and guided reflection, attendees will learn how to connect data to human experience, tailor messages to diverse audiences, and craft narratives that stick.

Austin Meyer is an award-winning filmmaker, educator, and member of Only People, a learning experience design studio inspired by the art & activism of John Lennon & Yoko Ono. Through his work, Meyer crafts stories and interactive learning experiences that change the way people walk through the world by inspiring empathy, curiosity, and wonder. He does this through a unique lens that blends journalistic rigor and ethics with a spirit of play and improvisation.

As a documentary filmmaker, Meyer’s work has been featured by HBO, Hulu, Apple TV, The New York Times, National Geographic, and The Washington Post among others. He has also worked with organizations such as The United Nations, Stanford University, The North Face, and JP Morgan Chase. 

Meyer has received recognition from various outlets for his documentary work. As the winner of the New York Times’ International Reporting Fellowship with Pulitzer Prize winner Nicholas Kristof, Meyer documented the opioid crisis in the US, malnutrition in India, and human trafficking in Nepal. As a recipient of the Level 1 Grant from the National Geographic Society, Meyer is also a National Geographic Explorer. His work for National Geographic has spanned continents and subject matter, from maternal healthcare in Sub-Saharan Africa, to the refugee crisis in the Middle East, wildfire disasters in his hometown of Santa Rosa, California, and animal exploitation in the industrial food system. 

Beyond the camera, Meyer is a professional theatrical improviser. Over the past decade he has taught hundreds of workshops on applied improv & storytelling to businesses, schools, and leaders around the world. Meyer holds a BA in creative writing and MA in journalism from Stanford University.

Seamus Yu Harte is the Head of Learning Experience Design for the Electives Program at the Hasso Plattner Institute of Design (aka the d.school) and the founder of Only People, a learning experience design studio inspired by the art & activism of John Lennon & Yoko Ono.

Prior to Stanford d.school & Only People, Seamus was the Senior Producer for The John Lennon Educational Tour Bus, Learning Experience Designer at Digital Media Academy and Creative Director and Director of Radical Experiments at Nearpod. Project-based learning & radical collaboration have been at the core of Seamus’ entire career.

His work at the Stanford d.school includes overseeing the design, development, and delivery of over 30 elective courses every academic year—all project-based, team-taught, radical collaborations that amount to over 1,000 Stanford students and nearly 150 Faculty & Lecturers in the d.school teaching community.

He currently co-teaches a course titled How to Shoot for the Moon, a radical collaboration at the Stanford d.school, described as a “kaleidoscope of curriculum inspired by the science and art of space exploration to help students discover who they are, why they’re here, where they want to go, and how to experiment towards getting there.”

From Yoko Ono to David Kelley, Seamus has had the opportunity to teach and learn with world-class creatives. He holds a BS in Sound Design from SAE and an MFA in Documentary Film + Video from Stanford University, where he also received Fellowships from The Stanford Institute for Creativity and the Arts (SiCA) and The San Francisco Foundation.

ORGANIZATION

Only People is a network of experts designed to help individuals, teams and organizations imagine, make and champion social change. Our methods are inspired by the life and legacy of John Lennon and Yoko Ono and informed by the science, research, and art of teaching and learning at Stanford University. In a nutshell: Only People helps people remix how the(ir) world works.

DMZ sponsors

Platinum

Gold

Silver

Sponsor DMZ US 2026

If you are interested in sponsoring DMZ US 2026, in Redwood City California, March 3-5, please complete this form and we will send you the sponsorship package.

Lock in the lowest prices today - Prices increase as tickets sell

Original price: $2,495.00. Current price: $1,495.00.

Although there are no refunds, substitutions can be made without cost. For every three that register from the same organization, the fourth person is free!

Low prices for everyone - lower prices for teams and students

We keep costs low without sacrificing quality, and we pass these savings on through lower registration prices. As with the airline industry, however, prices will rise as seats fill. As people register for our event, the ticket price goes up, so the least expensive ticket price is today's! If you would like to register your entire team and combine DMZ with an in-person team-building event, complete this form and we will contact you within 24 hours with discounted prices. If you are a student or work full-time for a not-for-profit organization, please complete this form and we will contact you within 24 hours with discounted prices.

Location and hotels

The Oracle Conference Center (350 Oracle Pkwy, Redwood City, CA 94065) is just 15 minutes from San Francisco International Airport, 25 minutes from Oakland International, and 25 minutes from San Jose Mineta International Airport. 

Hyatt House

Hyatt House (400 Concourse Drive, Belmont, CA 94002) offers a discounted rate of $169/night for the 1-bedroom suite and $199/night for the 2-bedroom suite. It is about a ten-minute walk from the conference center. Parking is $7/day at the hotel, and breakfast is included. You can call the hotel directly at (650) 591-8600 and ask for the Data Modeling Zone (DMZ) rate, or book directly online by clicking here.

Other Hotels Nearby (all within a 20-minute walk)

Test your DMZ knowledge: Game #1

We have had over 500 speakers at our conferences since 2012. Do you know who the keynote speakers are and when they spoke? Take a guess and roll your mouse over the picture to see if you are right!