Data Modeling Made Simple, by Steve Hoberman
Data Modeling Made Simple will provide the business or IT professional with a practical working knowledge of data modeling concepts and best practices.
Data model explained
Fun with ice cream
Fun with business cards
Exercise 1: educating your neighbor
Communicating during the modeling process
Communicating after the modeling process
Data model uses
Exercise 2: converting the non-believer
The data model and the camera
Exercise 3: choosing the right setting
Exercise 4: defining concepts
Exercise 5: assigning domains
Exercise 6: reading a model
Candidate key (primary and alternate) explained
Surrogate key explained
Foreign key explained
Secondary key explained
Exercise 7: clarifying customer ID
Conceptual data model explained
Relational and dimensional conceptual data models
Relational CDM example
Dimensional CDM example
Creating a conceptual data model
Step 1: ask the five strategic questions
Step 2: identify and define the concepts
Step 3: capture the relationships
Step 4: determine the most useful form
Step 5: review and confirm
Exercise 8: building a cdm
Logical data model explained
Relational and dimensional logical data models
Relational LDM example
Dimensional LDM example
Creating a relational logical data model
Creating a dimensional logical data model
Exercise 9: modifying a logical data model
Physical data model explained
Relational and dimensional physical data models
Exercise 10: getting physical with subtypes
Exercise 11: building the template
Data model scorecard explained
Exercise 12: determining the most challenging scorecard category
Recognizing people issues
Identifying the stakeholders
Asking key questions
Packaging it up
Staying on track
Following good practices
Dealing with problems – and problem people
Exercise 13: keeping a diary
Unstructured data explained
Data modeling and abstraction
Immutable unstructured data
Exercise 14: looking for a taxonomy
Class model explained
Use case model explained
Exercise 15: creating a use case
1. What is metadata?
2. How do you quantify the value of the logical data model?
3. Where does XML fit?
4. Where does agile fit?
5. How do I keep my modeling skills sharp?
This book is written in a conversational style that encourages you to read it from start to finish and master these ten objectives:
1. Know when a data model is needed and which type of data model is most effective for each situation
2. Read a data model of any size and complexity with the same confidence as reading a book
3. Build a fully normalized relational data model, as well as an easily navigable dimensional model
4. Apply techniques to turn a logical data model into an efficient physical design
5. Leverage several templates to make requirements gathering more efficient and accurate
6. Explain all ten categories of the Data Model Scorecard
7. Learn strategies to improve your working relationships with others
8. Appreciate the impact unstructured data has, and will have, on our data modeling deliverables
9. Learn basic UML concepts
10. Put data modeling in context with XML, metadata, and agile development
Book Review by Johnny Gay
In this book review, I address each section in the book and provide what I found most valuable as a data modeler. I compare, as I go, how the book’s structure eases the new data modeler into the subject much like an instructor might ease a beginning swimmer into the pool.
This book begins like a Dan Brown novel. It even starts out with the protagonist, our favorite data modeler, lost on a dark road somewhere in France. In this case, what saves him isn't a cipher but, of all things, something very much like a data model: a map! The author deems them both way-finding tools.
The chapters in the book are divided into 5 sections. The chapters in each section end with an exercise and a list of the key points covered to reinforce what you’ve learned. I find myself comparing the teaching structure of the book to the way most of us learn to swim.
SECTION I: Data Modeling Introduction
The first section is like the shallow end of the pool, where as a beginning swimmer, you can dip your toes in to test the water. These easy chapters are short and concise. Here the author uses very common objects to describe what a data model is, and why it is so valuable. His first examples made excellent use of what’s truly a universal data model to millions of computer users – in school and business – the spreadsheet.
SECTION II: Data Model Components
In the second section, Steve Hoberman introduces you to the simplest components that make up a data model, and explains the important terms that we apply when we discuss them. By the end of section 2, you'll have both feet comfortably in the water. You're ready and eager to plunge into the depths of this pool of data model knowledge.
SECTION III: Subject Area, Logical, and Physical Data Models
You’ve made it to the deep end of the pool where you get a real workout as you lap through the 3 levels of data models: subject area (or conceptual), logical, and physical. Just as there are different strokes for different folks, there are different models for different audiences. By the end of section 3, you’ll be able to swim through the intricacies of a data model like a barracuda, and you’ll know more than many working data modelers do today. Calling oneself a data modeler and being a data modeler are two very different things indeed.
SECTION IV: Data Modeling Quality
Just as swimmers can kick-start their movement through the water with the use of swimming aids (maybe a flotation device or fins will help), you can utilize Steve’s 4 favorite templates to collect and organize the requirements that will define your data model. They let you easily wade through the river of requirements you’ll collect. You may recall the scorecard the Olympic judges use to rate a dive. Steve introduces his Data Model Scorecard, which applies a quality rating to a data model. It’s an objective look at the quality of the model built. I too have seen how a number of quick and dirty data models eventually led to the untimely death of the databases for which they were created. I’m convinced that applying the scorecard helps us deliver a higher quality data model resulting in a higher quality database that will live a much longer and healthier life. We are actually adopting this tool where I work, after applying our own weightings to his 10 criteria.
One of the things I like about this author is that he’ll be the first to tell you that he doesn’t know everything. He isn’t shy about asking for help. And so he does, engaging three of the most renowned go-to guys in the data world, all experts in their own right. In this section, Graeme Simsion wrote a chapter on working with others. In the next section, Bill Inmon wrote a chapter on unstructured data, and Michael Blaha wrote a chapter on UML. I was very pleasantly surprised to find that all three contributions together helped complete the pieces missing in the first edition.
Graeme Simsion shares his advice on working with others from the viewpoint of the data modeler as a consultant. Here he takes a look at the other side of the quality question, and cautions you not to get too focused on quality to the point that it becomes a detriment to the success of the project. I saw myself as he talks about getting caught in the perfection trap. I’ll heed his advice that the data model is ultimately judged by how valuable it is to a project’s success and not by how close it is to perfection. He also provides his own list of seven habits for highly effective data modelers to help us stay on track. My biggest take-away: the client should own the deliverable, not the data modeler.
SECTION V – Beyond Data Modeling
Believe it or not, you’re ready to leave the pool and jump head first into a small part of the ocean of outside influences that affect a data modeler’s work. Very few (if any) of us work in a vacuum, except maybe an astronaut. Bill Inmon tackles unstructured data with taxonomies. Here he simply provides the best explanation of taxonomies and ontologies that I’ve found. And I’ve seen far more than I care to admit! Thanks to Bill and his clear explanation, they are both far simpler to understand than they sound.
Michael Blaha, who literally wrote the book on the subject of the Unified Modeling Language (UML), follows with an introduction to UML. You are getting it straight from the best. You can’t help but notice the similarities between a data model diagram and a class diagram as he describes the latter. They both describe a data structure using similar components, and you can generate a database schema by applying a set of similar rules to either. Steve ends by answering the 5 most frequently asked modeling questions that he has encountered.
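To make that parallel concrete, here is a minimal sketch (my own illustration, not from the book) of how a simple set of rules can turn an entity description with typed attributes, whether it came from a class diagram or a logical data model, into DDL. All names and type mappings here are hypothetical and simplified.

```python
# Hypothetical mapping from attribute types to SQL column types.
TYPE_MAP = {str: "VARCHAR(255)", int: "INTEGER", float: "DECIMAL(10,2)"}

def to_create_table(entity: str, attributes: dict, pk: str) -> str:
    """Apply simple rules to an entity description to produce a CREATE TABLE.

    The same rules work whether the description was read off a class
    diagram or a logical data model: entity -> table, attribute -> column,
    identifier -> primary key.
    """
    cols = []
    for name, py_type in attributes.items():
        col = f"  {name} {TYPE_MAP[py_type]}"
        if name == pk:
            col += " PRIMARY KEY"
        cols.append(col)
    return f"CREATE TABLE {entity} (\n" + ",\n".join(cols) + "\n);"

ddl = to_create_table("Customer", {"customer_id": int, "name": str}, pk="customer_id")
print(ddl)
```

Real modeling tools apply many more rules (foreign keys, nullability, indexes), but the entity-to-table, attribute-to-column core is the shared idea the review points at.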
I dare say, by the end of the book you could know more than most data modelers working today. This revision took the first edition up several notches from what some deemed a data modeling for dummies book, to what is now a full-fledged textbook. It’s easy to see how it could quickly and easily light the way for many future data modelers in any classroom. I have it on good authority that the author wrote this book to be the easiest-to-read and most comprehensive data modeling text on the planet. I agree. This is in itself a wonderful way-finding tool for data modelers that’s very easy on the eyes and complete in its coverage. It will save you countless hours of confusion, and possibly even years of data modeling in the dark. The only really big flaw I found with this book is that it didn’t exist 15 years ago when I started as a beginner lost in a dark classroom somewhere in Texas.
Steve Hoberman has been a data modeler for over 30 years, and thousands of business and data professionals have completed his Data Modeling Master Class. Steve is the author of nine books on data modeling, including The Rosedata Stone and Data Modeling Made Simple. Steve is also the author of Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, creator of the Data Modeling Institute’s Data Modeling Certification exam, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, lecturer at Columbia University, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award.