Enterprise Integration & Modeling: Metadatabase Research Home

Frequently Asked Questions

IBMS Installation:

Running IBMS:

Using IBMS: Conceptual Questions

Using the TSER constructs for Information Modeling

Entity Relationship Modeling

Object Oriented Modeling

Activities - Data Flows oriented Modeling

Using IBMS: Bugs, Technical Questions from users, etc.

More Information:


IBMS Installation:

1. "No CTL3D.DLL" Message?

IBMS requires CTL3D.DLL in the "C:\WINDOWS\SYSTEM" directory. You can obtain this file from the Microsoft library.

2. System Crashes After Download?

You can ftp to ftp.rpi.edu, change to the pub/www directory, and download the software in binary mode. It is a self-extracting file.

3. Unix Version of IBMS?

Yes, there is. Please contact Dr. Gilbert Babin for more information.


Running IBMS:

1. What could be the input for IBMS?

You can input the top-level ("Functional" or "Application" level) view of the system you are trying to model, including its data elements, the semantic associations between them, and the various rules within and between applications. This input can be given using either the semantic modeling constructs provided by TSER, or other methods such as Data Flow Diagrams and IDEF0 modeling constructs.

Alternatively, you can start at the "Structural" level of system modeling, e.g. a normalized ER structure of the system.

2. What could be the output of IBMS?

You can have normalized schemas/tables generated for your application. Also, you can populate the application's metadata for advanced usage; i.e., you can convert the schema into DBMS code that can be implemented immediately.

3. Where to Report Program Bugs?

Please send your comments.


Using TSER constructs for Information Modeling

ENTITY RELATIONSHIP MODELING

1. How to do ER type modeling using TSER constructs?

There are two ways a data modeler might want to use TSER to model entities and relationships: one is by using the (OER) Entity-Relationship diagram and the other is by using the higher-level (SER) Subject-Context diagram. It is also possible to capture knowledge in the form of rules using TSER.

2. How does TSER modeling differ from ER modeling and Object Oriented modeling?

They are different, indeed. TSER covers horizontally (through the SER construct) both data modeling and knowledge modeling in the same framework, and vertically (SER and OER) both semantic/functional modeling and logical data modeling/database design. Traditional ER models do not include knowledge and do not support dependency analysis within an entity or a relationship. When comparing TSER to object models, the latter subordinate knowledge to data (i.e., they lack the independent representation of inter-subject knowledge as contexts) and do not support dependency analysis either. Thus, the design of the CASE tool is heavily biased towards implementing TSER as opposed to equally supporting ER and OO. Since the software is now provided to the community at large, we are enhancing the ER and OO personalities of TSER to support these common uses. Even when this is complete, ER is best compared not to SER but to OER. For some disciplined ER models that resemble OO, we may be able to re-interpret TSER's OO personality (plus OER) for this branch of ER; a comparison of the two would then make direct sense.

3. What is the use of a "context" construct? Is it like a "Relationship" construct? Does it help in SER (functional) to OER (structural) mapping?

At the SER (functional) level, the subject is the construct for data modeling and the context for knowledge only. The mapping of SER to OER is concerned with data; hence the mapping is centered around subjects. If you interpret a context to be a relationship as in an ER model, you will be disappointed to find that contexts have no impact on the results of the normalization called OER modeling. We have done some work to relate knowledge in contexts (in the form of production rules), as well as in subjects, to OER models and to implement them directly into a database whose DBMS supports rule programming facilities (e.g., triggers and a C library). But, primarily, the integration of data and knowledge takes place in another product of TSER, the metadatabase. This purpose of TSER is truly unconventional and brought about some of its "oddest" features. All the models of an enterprise, SERs and OERs, are populated into the structure of the metadatabase, which then uses them to manage the multiple application systems (local databases) for repository uses, global queries, updates, and events management. This is why the SER modeling dialog windows ask for software resources/conversion routines and so on. In terms of objectives, an integrator separate from the target application system(s) is also being readied as the information models are created using TSER. The metadatabase is NOT required for the use of TSER, of course; however, it shapes our design scope.

OBJECT ORIENTED MODELING

1. How to do Object Oriented Modeling using TSER constructs?

Consider a subject as an object in the object-oriented paradigm. The context in this case would store inheritance relationships between objects. In general, "global rules" that involve data items from more than one subject (object) are grouped into contexts. The design objective here is to expose these rules to modelers and users for consolidation of interactive-object behavioral/knowledge modeling, rulebase design, and knowledge model adaptability.

2. In my vocabulary, the term "Inheritance" means "is-a" or "Full Inheritance" (for instance in C++ or Smalltalk). What exactly are the "Partial", "Referential", and "Composing" types of "Inheritance"?

When we do object-oriented modeling through TSER, the construct to use is SER. The basic prototype is a simple hierarchy of subjects (object classes) where a "sub-subject" is linked through "is-a" inheritance to its "super-subject" and the link is represented as a context with "full" inheritance. The modeler can then leave the SER model as is, or map it to an OER and even create an EXPRESS schema for it. In the latter case, the OER becomes the lowest level of the O-O hierarchy. The reason one might want to normalize the O-O model is either to implement the model on a relational platform (Smalltalk still does this, I think), or to integrate the model with other models in the enterprise. Now, why do we also have other types of inheritance? The answer is that we have more semantics to offer once the potential of OER is tapped. Specifically, is-a inheritance implies an MR type of integrity rule; so does a complex object interpreted as composing inheritance. However, there are other possibilities. One is what the C++ language (and some others) call an "attribute type", which is not as strict as is-a and implies only an FR type of integrity (i.e., the removal of the super-object does not necessarily mandate the removal of the sub-object). We call this type of inheritance "referential", and the system will create the FR rules for it automatically. The modeler also has the option of inheriting not all but a subset of attributes - which might be defined through some functions - from the super-subject(s) (especially useful in a multiple-inheritance situation); this is referred to as "partial" inheritance. The internal integrity checking of the system works differently when the user specifies partial as opposed to full.
More interesting is, perhaps, the fact that a relational type of semantic association of multiple subjects does not subscribe to any strict "inheritance" in either of the above interpretations (full or referential); the "partial" type of inheritance defines this class of semantics very well. PR type integrity rules will result from a partial inheritance.

ACTIVITIES AND DATA FLOWS ORIENTED MODELING

1. How to go about DFD or IDEF0 modeling using TSER?

The IBMS case tool provides separate modules to do DFD and IDEF0 modeling. Then these models can be mapped by the software to generate an equivalent SER (semantic) model of the system (only DFD -> SER mapping is currently available, IDEF0 module is in the pipeline). After you have the SER model, you can use the software to modify it semantically, map it into a structural (OER) model and/or generate the DBMS code to be implemented.

2. How exactly does the software map the DFD into the SER?

All the processes/activities in DFD are converted into "contexts" in SER. All the data flows between process->process and process->external entity are converted into "subjects". All data stores become "subjects". The data flows to data stores are not modeled as separate subjects but form part of the data store subject.
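Under assumed names and a hypothetical DFD encoding (IBMS's internal format is not shown here), these mapping rules can be sketched as a small transformation:

```python
# Sketch of the DFD -> SER mapping rules described above.
# The data structures and names are illustrative assumptions, not IBMS internals.

def map_dfd_to_ser(processes, data_stores, flows):
    """flows: (source, target, label) triples; source/target are process,
    data store, or external entity names."""
    contexts = set(processes)        # every process/activity becomes a context
    subjects = set(data_stores)      # every data store becomes a subject
    for source, target, label in flows:
        # Flows to/from data stores fold into the data store's subject;
        # process->process and process->external flows become subjects.
        if source in data_stores or target in data_stores:
            continue
        subjects.add(label)
    return contexts, subjects

contexts, subjects = map_dfd_to_ser(
    processes={"Take Order", "Ship Goods"},
    data_stores={"Orders"},
    flows=[("Take Order", "Orders", "new order"),
           ("Take Order", "Ship Goods", "pick list"),
           ("Ship Goods", "Customer", "invoice")])
# contexts: {"Take Order", "Ship Goods"}; subjects: {"Orders", "pick list", "invoice"}
```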

3. Is this mapping perfect? Or does the user have to modify the SER thus produced?

It is possible in the DFD to repeat semantic information in the form of identical data flows. In such a case the SER produced would contain two identical subjects, which the user might want to merge manually. This is not required, though; the normalization process from SER -> OER will take care of this redundancy.

OTHER CONCEPTUAL MODELING QUESTIONS

1. How do you handle situations where the same data item plays different semantic roles in a model?

In TSER, we assume that all data items in the same application model will have distinct names. Thus, a way to implement the case is to define different names for the different roles of the same data item but then establish that they are all equivalent by using the equivalence facility of SER modeling. The different names will appear in all models created therefrom (OER, schema, etc.), but the system will interpret the models and generate integrity rules according to their equivalence.

2. From a "functional" point of view, there may be many different "applications" in an "enterprise". How do you integrate them?

In TSER, the SER (semantic representation) is the main interface with users and other models. The philosophy is that semantics are the main issue in integration (and, indeed, in all modeling), and structural consolidation should follow. Thus, the system asks the user only to indicate how two SER models are related (how they should be associated/concatenated) in terms of the contexts (business rules/events/flows/inheritances) linking them and the equivalences among data items. Once this is done, the system moves to consolidate them at the OER (structural) level by combining the entities and relationships derived from the SER models that have identical keys (including keys that have equivalent attributes). Whereas the concatenation of two object-oriented models always faces the hard issues of semantics resolution, object class redefinition, and object hierarchy modification, TSER separates these issues and resolves them by employing a neutral structure at the OER level for consolidation, based on semantics input from the SER level. The second step is by design largely carried out by the system, and the user sees only the initial simple concatenation. The tool manual does, however, expose the OER consolidation module for users who want access to this step, too.
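The key-based consolidation step can be illustrated roughly as follows; this is a simplified sketch in which the entity representation is assumed and equivalence resolution is reduced to a name-substitution map:

```python
# Hypothetical sketch of OER-level consolidation by identical keys.
# The entity representation and equivalence map are illustrative assumptions.

def consolidate(entities, equivalent):
    """entities: name -> (key_items, non_key_attributes).
    Entities whose keys are identical after applying the equivalence
    map are merged into one."""
    canon = lambda item: equivalent.get(item, item)
    merged = {}
    for name, (keys, attrs) in entities.items():
        key = frozenset(map(canon, keys))
        merged.setdefault(key, set()).update(map(canon, attrs))
    return merged

# Entities derived from two SER models; CLIENT_ID was declared
# equivalent to CUST_NO at the SER level.
result = consolidate(
    {"Customer": ({"CUST_NO"}, {"NAME"}),
     "Client":   ({"CLIENT_ID"}, {"PHONE"})},
    equivalent={"CLIENT_ID": "CUST_NO"})
# result: {frozenset({"CUST_NO"}): {"NAME", "PHONE"}}
```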

3. Explain "Global" and "Local" data items.

Because of the potential population of models into an integrating metadatabase, we want to know whether or not the data items being defined use a global standard. If not, then formats must be converted among data items that are equivalent but distributed across different application systems (databases). A question here is: when the equivalent data items are stored in the same application system and thus need no conversion at all - the case when TSER is used for traditional data modeling - how should one go about the definition of equivalence? A quick and dirty way is to declare both/all as global, since the system being modeled is the only system and hence the global system. But we will think more thoroughly about these "metadatabase-bound" features, as they are not needed in single-system applications. At the least, we will provide a better default/suggestion - e.g., global - and online help on this definition.

4. How do you implement all the integrity rules that TSER generates?

The four types of OER (structural) constructs define four types of integrity rules. They are always included in the suggested schema that TSER produces. Whether or not these rules can be automatically created by TSER for implementation into a database, however, depends on the integrity control facilities provided by the DBMS. In the latest systems, the key integrity rules (Entity and PR) are commonly available, and TSER is able to incorporate them in the schema generators provided. Referential integrity (FR) and existence rules (MR), on the other hand, are not automatically implementable yet in most cases. As you know, even Oracle v7, which provides triggers (and nominal referential rules), has some basic difficulty with this implementation. At this point, we do what the DBMSs have to offer in their standard facilities. When the rule implementation/creation capability is developed (see #2 above), we will be able to implement all integrity rules, and more, into the schemas.

5. How to "Picture" the SER modeling perspective, from an application developer's point of view?

To "picture SER", you probably should try the following perspectives:

1. Application: Consider a basic (non-decomposed) subject as a report, a form, or something corresponding to a (relational) view. On this basis, a subject can also be used to represent an application, application family, activity, or process that can be decomposed (in the manner of DFD/IDEF) into basic subjects. In this sense, the subject is the exclusive carrier of data semantics while the context expressly defines the interaction rules of data flows among subjects. Thus, contexts are indeed semantic relationships of the semantic entities we now call subjects, except that we do not define data items in these relationships (but merely refer to them in the rules). A context can be compared to a process of DFD in terms of its contents. The subtle difference here is, of course, that a DFD is focused primarily on processes while an SER is anchored methodologically in subjects - this reflects our roots in databases, as opposed to the application software roots of DFD/IDEF0/Petri nets. It is also appropriate to use a subject as a universe relation; the single-subject hospital example shows this idea. The result (OER) is highly sensitive to the FDs that one formulates. The resulting OER is designed to be at least in Third Normal Form. The user might want to customize it for system performance purposes. If TSER does not perform acceptably in your cases, let us know; then we might need to review the mapping algorithms and/or the input.

2. Object: Consider a subject as an object in the object-oriented paradigm. The main difference here would be the context: "global rules" that involve data items from more than one subject (object) are grouped into contexts. The design objective here is to expose these rules to modelers and users for consolidation of interactive-object behavioral/knowledge modeling, rulebase design, and knowledge model adaptability.

3. Mixed: the above two perspectives can be combined to provide maximum flexibility for semantic data modeling (in the tradition of database design) and information modeling (in the tradition of systems analysis).

A modeler could proceed along either of these two lines: (1) talk to the users about the fixed end-user products of the intended database and model these products as subjects, then use contexts to model operating constraints/events/rules for application software development - and then compose/aggregate/generalize them into higher-level subjects if desirable; or (2) follow the tradition of structured systems analysis and define an application subject for the entire application system, and then decompose from there until the end-user products are defined. In our examples, the banking case shows the first approach and the CIM case the second. The hospital case has two versions: one shows the universe relation approach (single subject) and the other is object-oriented. The project management case in the book, which uses DFD as the front end, shows the comparison of SER with DFD as well as the systems analysis approach.

6. What is a functional dependency?

Once you've entered the data items for a subject, you must specify how those items are related to each other. You define these relationships through functional dependencies (FDs).

An FD consists of two parts: the determinant, usually the left side; and the determined, usually the right side. Each side is made up of one or more data items. The logic of an FD is as follows: for each instance of the determinant there is exactly one corresponding instance of the determined. In other words, the value of the determinant uniquely determines the value of the determined. The converse is not necessarily true; many determinants may be associated with a single determined.

One of the most common FDs occurs when a subject has an ID that uniquely identifies it. Then the data items describing that subject are determined by the ID. For example, every US citizen has a social security number. Other information about a person, such as name and date of birth, is uniquely identified by the social security number. This relationship is described by the following FD:

SS# ---> NAME DOB

In this case, the determinant is SS# and the determined is NAME DOB. Note that the social security number is not uniquely identified by the name and date of birth. Two people can be born on the same date and given the same name, but they will have different social security numbers.
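This definition can be checked mechanically: a set of rows satisfies X ---> Y exactly when no two rows agree on X but differ on Y. A minimal sketch (the function name and sample data are assumptions, not part of IBMS):

```python
def satisfies_fd(rows, determinant, determined):
    """True if no two rows agree on the determinant but differ on the determined."""
    seen = {}
    for row in rows:
        lhs = tuple(row[a] for a in determinant)
        rhs = tuple(row[a] for a in determined)
        if seen.setdefault(lhs, rhs) != rhs:
            return False
    return True

people = [
    {"SS#": "111", "NAME": "Ann Lee", "DOB": "1970-01-02"},
    {"SS#": "222", "NAME": "Ann Lee", "DOB": "1970-01-02"},  # same name/DOB, new SS#
]
satisfies_fd(people, ["SS#"], ["NAME", "DOB"])   # True: SS# ---> NAME DOB
satisfies_fd(people, ["NAME", "DOB"], ["SS#"])   # False: the converse does not hold
```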

There are three main types of FDs:

Traditional FDs ( ---> ) are the same in structure as the example given above. Many different values of the determinant may be associated with a single value of the determined; thus traditional FDs are also called many-to-one FDs. The relationship of child to mother is a traditional FD. The child's mother is unique for each child, but one mother may have many children. This relationship is represented by the FD:

CHILD ---> MOTHER

One-to-one FDs ( <--> ) are the combination of two traditional FDs: each side is both determinant and determined. A simple example is the relationship between husband and wife. Each husband has one wife, and each wife has one husband. The corresponding FD is:

HUSBAND <--> WIFE.

This FD is equivalent to the pair of traditional FDs:

HUSBAND ---> WIFE

WIFE ---> HUSBAND

A special case of a one-to-one FD is a multivalued dependency (MVD), in which the two sides are identical. MVDs define many-to-many relationships between the data items. This type of FD is used to preserve a relationship where the combination of the data items has some significant meaning. For example, the relationship between grandparent and grandchild is an MVD. A grandparent may have many grandchildren; likewise a grandchild may have several grandparents. The combinations of grandparent with grandchild identify which children are grandchildren of which adults. This relationship is expressed by the FD:

GRANDCHILD GRANDPARENT <--> GRANDCHILD GRANDPARENT

Mandatory relationships ( <<-- ) are many-to-one relationships with an additional constraint. Each instance of the determinant is "owned" by the corresponding instance of the determined. Thus the deletion of the determined requires the deletion of the determinant. For any married member of a family, a mandatory relationship exists between that member's spouse and in-laws (members of the spouse's family). If the spouse leaves the family, the in-laws are no longer part of the family. The FD that expresses this is:

IN-LAW <<-- SPOUSE

Note that each in-law "belongs" to a particular spouse; that is, the in-law is the determinant, and the spouse is the determined. Within every mandatory relationship is a traditional FD. You do not have to define the traditional FD separately; it is implied by the mandatory relationship. In this case, the underlying traditional FD is:

IN-LAW ---> SPOUSE

In a mandatory relationship, many instances of the determinant are owned by the determined. The double arrow indicates the many side of the relationship and the direction of ownership, not the direction of determination.

Some General Properties of FD's

There are some general properties of FDs that will be helpful for general FD declaration.

UNION

If A --> B and A --> C then A --> B, C

Example:

SS# --> NAME

SS# --> ADDRESS

SS# --> PHONE

is equivalent to

SS# --> NAME, ADDRESS, PHONE

DECOMPOSITION

If A --> B, C then A --> B and A --> C

Example:

(STUDENT_ID, COURSE) --> TEACHER, GRADE

is equivalent to

(STUDENT_ID, COURSE) --> GRADE

(STUDENT_ID, COURSE) --> TEACHER

TRANSITIVITY

If A --> B and B --> C then A --> C

Example:

STUDENT_ID --> NAME, MOTHER, FATHER

FATHER --> WORK_PHONE

leads to STUDENT_ID --> WORK_PHONE

Whenever possible, it is best to simply declare FDs A --> B and B --> C.

SIMPLIFICATION

Avoid FD declarations as follows:

A --> B, C

B --> C

Model the above as:

A --> B

B --> C
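Taken together, these properties let one compute everything an attribute set determines (its closure), which is the basis of normalization. A compact sketch, assuming FDs are encoded as (left-side, right-side) pairs of attribute sets:

```python
def closure(attrs, fds):
    """All attributes determined by `attrs` under the given FDs
    (union, decomposition, and transitivity applied to a fixpoint)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs        # transitivity: attrs -> lhs -> rhs
                changed = True
    return result

fds = [({"STUDENT_ID"}, {"NAME", "MOTHER", "FATHER"}),
       ({"FATHER"}, {"WORK_PHONE"})]
closure({"STUDENT_ID"}, fds)
# {"STUDENT_ID", "NAME", "MOTHER", "FATHER", "WORK_PHONE"}
```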

Note: For a more complete discussion on Functional Dependency, refer to the following text "An Introduction to Database Systems," Sixth Edition, by C. J. Date, Addison-Wesley, 1995.

7. What is IDEF0?

IDEF: ICAM Definitional System

The integrated computer-aided manufacturing (ICAM) project launched by the Air Force in the late 1970s developed an enterprise modeling system called ICAM definition, or IDEF. This system consists of IDEF0 (read "IDEF zero") and a few other components for the modeling of integrated manufacturing systems, with IDEF0 performing structured systems analysis at the functional level and the others performing entity-relationship data modeling, process simulation, and database design. However, only IDEF0 has received wide and common acceptance in industry. IDEF0 is based on the structured analysis and design technique (SADT), a graphical diagramming method developed in the 1970s.

The basic constructs of IDEF0 are the following, which are symbolized in the figure:

  1. Activity-subject. The functional elements of the system; always labeled by a phrase starting with a verb and decomposable.
  2. Input. The logical or physical objects to be turned into the output of the activity.
  3. Output. The logical or physical objects turned out by the activity.
  4. Control. The logical means of the activity, such as control algorithms, design specs, or process plans.
  5. Mechanism. The physical means of the activity, such as workstations and automatic guided vehicles.

Note that outputs from a box may become inputs or controls (or mechanisms) for other boxes and that input, output, and control define the interface of a box. All inputs to a box should generally sum up to its outputs, logically and physically. The inclusion of mechanism is optional.

The modeling methodology entails an iterative, "zoom-in, zoom-out" type of detailing hierarchy. Specifically, an IDEF0 model begins with a context diagram showing the whole system being modeled as a black box, i.e., a single activity-subject with its input, output, control, and mechanism. The purpose of the context diagram is to depict the scope and boundary of the model. The box is decomposed into levels of detailed IDEF0 submodels, with each submodel showing several boxes connected through their input, output, and control arrows and pertaining to a particular subject at the level immediately above it. The boxes in all submodels at all levels are carefully numbered according to the decomposition hierarchy to maintain a logical order. Once decomposed, a subject is fully and completely replaced by its submodels; its continued existence is merely a matter of convenience for presentation, communication, and record keeping. The decomposition of the context diagram into level 1 is obviously mandatory; further decomposition, however, is a choice of modeling, dictated only by need. When decomposing one box at a level, the other boxes at the same level can either also be decomposed or stay undecomposed; it is purely a modeling choice. At the completion of this decomposition process, collecting all submodels at the leaves of the decomposition tree should amount to a complete model, without having to involve any higher-level boxes from which the submodels were decomposed.


TSER in a Nutshell

In a nutshell, the SemanticER can be used most easily this way: a Subject is comparable either to an Object (in the OO personality) or to a form/application (in a traditional requirements analysis methodology). A Context, on the other hand, can be compared to a Process which inputs/outputs these forms/applications and thereby interrelates plural Subjects. All logic and constraints of the process are represented as production rules in the Context. Rules/triggers can also be included in a Subject if they pertain only to that subject. Functional dependencies are then declared for the data items included in each Subject. The system normalizes (at the level of BCNF, with integrity rules) these Subjects into the OperationalER. Thus, the resulting ER (in four types: Entity and three Relationships) is comparable to the (final refinement of the) traditional ER that follows some rigorous discipline to ensure relational soundness. The SemanticER does not bear direct resemblance to traditional ER, while the OperationalER does. Two of the three Relationships are pure integrity representations (one-to-many and many-to-one) relating Entities, and the third is the (Plural) Relationship (many-to-many).
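The BCNF level mentioned above has a simple test: a relation is in BCNF when every FD's determinant is a superkey. A hedged sketch of that check (not the IBMS normalizer; the attribute and FD encodings are assumptions):

```python
def closure(attrs, fds):
    """Attributes determined by `attrs` under the FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_bcnf(relation_attrs, fds):
    """BCNF: every FD's left side must determine all attributes (be a superkey)."""
    return all(closure(lhs, fds) >= set(relation_attrs) for lhs, rhs in fds)

# A subject with a transitive dependency is not in BCNF:
fds = [({"ORDER_NO"}, {"CUST_NO"}), ({"CUST_NO"}, {"CUST_NAME"})]
is_bcnf({"ORDER_NO", "CUST_NO", "CUST_NAME"}, fds)   # False: CUST_NO is not a superkey
```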


Using IBMS: Bugs, Technical Questions from users, etc.

1. In the "Application Definition" window, when using the cim.ser example:

  • why do some items in the item list have a $ at the front or in the middle?
  • why does trying to edit them cause an error: "STOP invalid operation"?

A $ at the front indicates a system-created item, so you won't be able to modify it. A $ in the middle usually indicates the item's class type in an object-oriented model; you can modify it through "Subject Definition" by using the "Add" and then the "Edit" button.

2. With the DFD tool, after invoking the FILE menu bar item and then the "Map to SER" menu item, all that happens is that a message pops up saying "Please explore the process in Context Diagram" --- How can I get IBMS to carry out this DFD to SER mapping?

We refer to the first level as the Context Level. After creating the process, you should decompose it into lower level(s) and define the DFD completely. "Map to SER" will carry the information from the lowest level's processes and all the connected components into the SER file. In other words, the system will not map the context level's information into the SER.

3. In the SER Modeling tool: How do I lock the movement of objects/components to move only in
horizontal/vertical directions?

Use <CTRL> left-button.

4. In the SER Modeling tool: How do I make my links right-angled instead of straight line links?

  1. Click on the link object.
  2. Use <CTRL> left-button. You will see a rectangular box that you can fit to the linkage to make it right-angled.

5. In the Subject Definition window: "class type" in the upper right: I cannot find anything explaining this
in the Help texts. Please explain.

If you "Search" for the topic "object-oriented", there is an entry point for "Multiple Inheritance"; click on that and you'll see a detailed explanation.

6. In the Process Definition window, the facility for entering rules is always inactive --
not yet implemented?

Right, we are planning to implement this part in the next version. However, you can use the SER modeling tool to input the rules into the SER file that is mapped from the DFD file.

7. Am I right in thinking that there was / is no way of changing the level on which something is defined ? I.e. in this case there was / is no way to insert a "new" topmost level and "push down" the old root-level to be below it ? This seems like an important consideration for SER modeling also, in that it looks as though there is no way to modify the structure (in terms of what stuff is on which level) of the overall system (other than destroy and rebuild manually) ? Is this impression correct ?

Yes, this is an excellent point. In fact, we thought about implementing this feature before. Anyway, we'll seriously consider this functionality in the next version.

8. Is the IBMS 1.1 manual valid for IBMS 1.2 ?

Yes. Also, the online help now contains more detailed information, e.g. Object-Oriented topic, examples for building FDs, ...and so on. You can print it out topic by topic.

9. I am a bit puzzled by the apparent behavior of the Item Definition Window in the SER Modeler. Most of the time, the Format part has only the Basic Types and NO Object radio button (giving access to a list of known classes), but sometimes this additional possibility IS there. I have not worked out yet exactly what the conditions are for the one or the other case to occur. Is this (seemingly inconsistent) behavior intentional? (I realize that when one is defining the very first Subject of all, the system cannot yet propose any others.)

Yes, the "Object" radio button in the Item Dialog is only activated when the user is not on the enterprise level, i.e., when the user is working on the decomposed subjects. The reason behind this is that it gives us easier control over object-oriented behavior. By the way, we have updated the displayed message so users can understand this better.

10. When I invoke the "Integrate SERs" menu item in the SER Modeler, a window pops up with the title "open integrated SER file". What does it mean?

It means "open another SER file that you want to integrate with the one currently in the SER Modeler". We will change the dialog title to make it more user-friendly.

11. At what point should I give the "Equivalence Definition"? What is meant by "Global" and "Local" items ?

The equivalence is used in mapping from SER to OER. The purpose of declaring equivalent items is to convert data formats when transferring data among heterogeneous environments. You can define it anytime after you specify all items for the applications and before you start the mapping. A "Global" item carries the global format of the data instance; sometimes the data must be converted to a local format and stored in a "Local" item. For example, if app1.i1 and app2.i2 are equivalent and both use the global format, then it does not matter which one is defined as the "Global" item, since no conversion function is required (this is also the usual case for equivalent items residing in the same application, since they use the same format). If i1 uses the global format and i2 a local format, then not only is i1 defined as the "Global" item and i2 as the "Local" item, but the conversion function should be specified as well (this is the usual case for items residing in different applications). Having the same items appear in both the Global and Local lists is just for the user's convenience in declaring equivalence. However, once an item is defined as "Local", it should not be redefined as "Global" later on, and vice versa. For more information, check the help facility.

12. Do the Contexts included in a .ser model, when they are *not* used to define O-O inheritances, actually have any influence on the results of the SER --> OER mapping ?

The mapping algorithms for SER (-> OER) take the semantic meanings, i.e., the Functional Dependencies, from the Subjects and normalize/consolidate them into the OER. Therefore the Contexts/flow directions will not affect the mapping results (with or without defining an O-O model). However, the Contexts can carry the required knowledge (rules) among Subjects; these rules are later used for controlling the processes within and between applications. More detailed descriptions can be found in the overview through the Help facility.

13. How to run the schema generating file, .sql, for Access97?

To create the database in ACCESS 97, follow the steps below:

  1. Check to see if any ACCESS 97 reserved word is used as an item/table name, e.g. "date". If so, either go back to the model (SER or OER) to change it and rerun from there, or change it through an editor.
  2. Invoke ACCESS 97.
  3. Go to "File" pull down menu, select "Open Database".
  4. In "Open Database" window, change directory to IBMS homepage directory, choose "COMPSQL.MDB", and click "Ok".
  5. The "Convert/Open Database" window will appear; mark the "Convert Database" option and click "Ok".
  6. In the "Convert Database In" window, input "File Name:" field (default value is db1.mdb), and click "Save".
  7. Then the "Enter File Name" window will appear, input path and .sql file name (don't forget the extension .sql), and press "Enter".
  8. Finally, through "Enter Database Name" window, input the name of database to be created from .sql file, and press "Enter".

More Information:

  1. What is "IBMS"?
    IBMS is a CASE tool for developing applications; please see the Help windows for more details.
  2. Want to Get More License Info or Technical Papers Concerning IBMS Research/General Metadatabase Research?
    Please contact Dr. Cheng Hsu.
 

viu.eng.rpi.edu is hosted by Professor Cheng Hsu.
Rensselaer Polytechnic Institute
Department of Industrial and Systems Engineering (formerly Decision Sciences & Engineering Systems)
110 8th St., Center for Industrial Innovation, Room 5123, Troy, NY 12180-3590

Copyright © 1997-2016 MetaWorld. Nothing on this site may be commercially used without written consent.
