Oracle9i Database Administrator's Guide
Release 2 (9.2)

Part Number A96521-01

21
General Management of Schema Objects

This chapter describes schema object management issues that are common across multiple types of schema objects. The following topics are presented:

Creating Multiple Tables and Views in a Single Operation

You can create several tables and views and grant privileges in one operation using the CREATE SCHEMA statement. The CREATE SCHEMA statement is useful if you want to guarantee the creation of several tables, views, and grants in one operation. If an individual table, view or grant fails, the entire statement is rolled back. None of the objects are created, nor are the privileges granted.

Specifically, the CREATE SCHEMA statement can include only CREATE TABLE, CREATE VIEW, and GRANT statements. You must have the privileges necessary to issue the included statements. You are not actually creating a schema; the schema is created when the user is created with a CREATE USER statement. Rather, you are populating the schema.

The following statement creates two tables and a view that joins data from the two tables:

CREATE SCHEMA AUTHORIZATION scott
    CREATE TABLE dept (
        deptno NUMBER(3,0) PRIMARY KEY,
        dname VARCHAR2(15),
        loc VARCHAR2(25))
    CREATE TABLE emp (
        empno NUMBER(5,0) PRIMARY KEY,
        ename VARCHAR2(15) NOT NULL,
        job VARCHAR2(10),
        mgr NUMBER(5,0),
        hiredate DATE DEFAULT (sysdate),
        sal NUMBER(7,2),
        comm NUMBER(7,2),
        deptno NUMBER(3,0) NOT NULL
        CONSTRAINT dept_fkey REFERENCES dept)
   CREATE VIEW sales_staff AS
        SELECT empno, ename, sal, comm
        FROM emp
        WHERE deptno = 30
        WITH CHECK OPTION CONSTRAINT sales_staff_cnst
        GRANT SELECT ON sales_staff TO human_resources;

The CREATE SCHEMA statement does not support Oracle extensions to the ANSI CREATE TABLE and CREATE VIEW statements, including the STORAGE clause.

Renaming Schema Objects

To rename an object, it must be in your schema. You can rename schema objects in either of the following ways:

If you drop and re-create an object, all privileges granted for that object are lost. Privileges must be regranted when the object is re-created.

Alternatively, a table, view, sequence, or a private synonym of a table, view, or sequence can be renamed using the RENAME statement. When using the RENAME statement, integrity constraints, indexes, and grants made for the object are carried forward for the new name. For example, the following statement renames the sales_staff view:

RENAME sales_staff TO dept_30;  

Note:

You cannot use RENAME for a stored PL/SQL program unit, public synonym, index, or cluster. To rename such an object, you must drop and re-create it.


Before renaming a schema object, consider the following effects:

Analyzing Tables, Indexes, and Clusters

You analyze a schema object (table, index, or cluster) to:

The following topics are discussed in this section:

Collecting Statistics for Tables, Indexes, and Clusters

You can use the DBMS_STATS package or the ANALYZE statement to gather statistics about the physical storage characteristics of a table, index, or cluster. These statistics are stored in the data dictionary and can be used by the optimizer to choose the most efficient execution plan for SQL statements accessing analyzed objects.

Oracle recommends using the more versatile DBMS_STATS package for gathering optimizer statistics, but you must use the ANALYZE statement to collect non-optimizer statistics, such as empty blocks, average space, and so forth.

Computing Statistics Using the DBMS_STATS Package

The DBMS_STATS package allows both the gathering of statistics, including utilizing parallel execution, and the external manipulation of statistics. Statistics can be stored in tables outside of the data dictionary, where they can be manipulated without affecting the optimizer. Statistics can be copied between databases or backup copies can be made.
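
For example, a minimal sketch of gathering optimizer statistics for a single table with DBMS_STATS.GATHER_TABLE_STATS (assuming the scott.emp table used elsewhere in this chapter; the sampling percentage is illustrative):

BEGIN
   -- Gather table, column, and index statistics for scott.emp,
   -- sampling roughly 30% of the rows (values are illustrative only).
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'SCOTT',
      tabname          => 'EMP',
      estimate_percent => 30,
      cascade          => TRUE);
END;
/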

The following DBMS_STATS procedures enable the gathering of optimizer statistics:

Computing Statistics Using the ANALYZE Statement

The following statement computes statistics for the emp table:

ANALYZE TABLE emp COMPUTE STATISTICS;

The following statement estimates statistics on the emp table, using the default statistical sample of 1064 rows:

ANALYZE TABLE emp ESTIMATE STATISTICS;

To specify the statistical sample that Oracle should use, include the SAMPLE option with the ESTIMATE STATISTICS option. You can specify an integer that indicates either a number of rows or index values, or a percentage of the rows or index values in the table. The following statements show examples of each option:

ANALYZE TABLE emp
   ESTIMATE STATISTICS
      SAMPLE 2000 ROWS;

ANALYZE TABLE emp
   ESTIMATE STATISTICS
      SAMPLE 33 PERCENT;

In either case, if you specify a percentage greater than 50, or a number of rows or index values that is greater than 50% of those in the object, Oracle computes the exact statistics, rather than estimating.

If the data dictionary currently contains statistics for the specified object when an ANALYZE statement is issued, the new statistics replace the old statistics in the data dictionary.

Some Optional Means of Computing Statistics

There are some PL/SQL packaged procedures that effectively allow you to execute an ANALYZE statement. These are:

Oracle recommends that you use these packaged procedures only for collecting non-optimizer statistics.
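
For example, DBMS_UTILITY.ANALYZE_SCHEMA is one such procedure. A minimal sketch that estimates statistics for all objects in the scott schema (the sample percentage is illustrative):

EXECUTE DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT', 'ESTIMATE', estimate_percent => 20);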

See Also:

Validating Tables, Indexes, Clusters, and Materialized Views

To verify the integrity of the structure of a table, index, cluster, or materialized view, use the ANALYZE statement with the VALIDATE STRUCTURE option. If the structure is valid, no error is returned. However, if the structure is corrupt, you receive an error message.

For example, in rare cases such as hardware or other system failures, an index can become corrupted and not perform correctly. When validating the index, you can confirm that every entry in the index points to the correct row of the associated table. If the index is corrupt, you can drop and re-create it.

If a table, index, or cluster is corrupt, you should drop it and re-create it. If a materialized view is corrupt, perform a complete refresh and ensure that you have remedied the problem. If the problem is not corrected, drop and re-create the materialized view.

The following statement analyzes the emp table:

ANALYZE TABLE emp VALIDATE STRUCTURE;

You can validate an object and all related objects (for example, indexes) by including the CASCADE option. The following statement validates the emp table and all associated indexes:

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE;

You can specify that you want to perform structure validation online while DML is occurring against the object being validated. There can be a slight performance impact when validating with ongoing DML affecting the object, but this is offset by the flexibility of being able to perform ANALYZE online. The following statement validates the emp table and all associated indexes online:

ANALYZE TABLE emp VALIDATE STRUCTURE CASCADE ONLINE;

Listing Chained Rows of Tables and Clusters

You can look at the chained and migrated rows of a table or cluster using the ANALYZE statement with the LIST CHAINED ROWS clause. The results of this statement are stored in a specified table created explicitly to accept the information returned by the LIST CHAINED ROWS clause. These results are useful in determining whether you have enough room for updates to rows. For example, this information can show whether PCTFREE is set appropriately for the table or cluster.

Creating a CHAINED_ROWS Table

To create the table to accept data returned by an ANALYZE ... LIST CHAINED ROWS statement, execute the UTLCHAIN.SQL or UTLCHN1.SQL script. These scripts are provided by Oracle. They create a table named CHAINED_ROWS in the schema of the user submitting the script.
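
For example, from SQL*Plus you might run the script as follows (the path assumes the standard location of these scripts under ORACLE_HOME):

SQL> @?/rdbms/admin/utlchain.sql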


Note:

Your choice of script to execute for creating the CHAINED_ROWS table is dependent upon the compatibility level of your database and the type of table you are analyzing. See the Oracle9i SQL Reference for more information.


After a CHAINED_ROWS table is created, you specify it in the INTO clause of the ANALYZE statement. For example, the following statement inserts rows containing information about the chained rows in the emp_dept cluster into the CHAINED_ROWS table:

ANALYZE CLUSTER emp_dept LIST CHAINED ROWS INTO CHAINED_ROWS;
See Also:

Oracle9i Database Reference for a description of the CHAINED_ROWS table

Eliminating Migrated or Chained Rows in a Table

You can use the information in the CHAINED_ROWS table to reduce or eliminate migrated and chained rows in an existing table. Use the following procedure.

  1. Use the ANALYZE statement to collect information about migrated and chained rows.
    ANALYZE TABLE order_hist LIST CHAINED ROWS;
    
    
  2. Query the output table:
    SELECT *
    FROM CHAINED_ROWS
    WHERE TABLE_NAME = 'ORDER_HIST';
    
    OWNER_NAME  TABLE_NAME  CLUST... HEAD_ROWID          TIMESTAMP
    ----------  ----------  -----... ------------------  ---------
    SCOTT       ORDER_HIST       ... AAAAluAAHAAAAA1AAA  04-MAR-96
    SCOTT       ORDER_HIST       ... AAAAluAAHAAAAA1AAB  04-MAR-96
    SCOTT       ORDER_HIST       ... AAAAluAAHAAAAA1AAC  04-MAR-96
    
    

    The output lists all rows that are either migrated or chained.

  3. If the output table shows that you have many migrated or chained rows, then you can eliminate migrated rows by continuing through the following steps:
  4. Create an intermediate table with the same columns as the existing table to hold the migrated and chained rows:
    CREATE TABLE int_order_hist
       AS SELECT *
          FROM order_hist
          WHERE ROWID IN
             (SELECT HEAD_ROWID
                FROM CHAINED_ROWS
                WHERE TABLE_NAME = 'ORDER_HIST');
    
    
  5. Delete the migrated and chained rows from the existing table:
    DELETE FROM order_hist
       WHERE ROWID IN
          (SELECT HEAD_ROWID
             FROM CHAINED_ROWS
             WHERE TABLE_NAME = 'ORDER_HIST');
    
    
  6. Insert the rows of the intermediate table into the existing table:
    INSERT INTO order_hist
       SELECT *
       FROM int_order_hist;
    
    
  7. Drop the intermediate table:
    DROP TABLE int_order_hist;
    
    
  8. Delete the information collected in step 1 from the output table:
    DELETE FROM CHAINED_ROWS
       WHERE TABLE_NAME = 'ORDER_HIST';
    
    
  9. Use the ANALYZE statement again, and query the output table.
  10. Any rows that appear in the output table are chained. You can eliminate chained rows only by increasing your data block size. It might not be possible to avoid chaining in all situations. Chaining is often unavoidable with tables that have a LONG column or long CHAR or VARCHAR2 columns.

Truncating Tables and Clusters

You can delete all rows of a table or all rows in a group of clustered tables so that the table (or cluster) still exists, but is completely empty. For example, consider a table that contains monthly data, and at the end of each month, you need to empty it (delete all rows) after archiving its data.

To delete all rows from a table, you have the following options:

Each of these options is discussed in the following sections.

Using DELETE

You can delete the rows of a table using the DELETE statement. For example, the following statement deletes all rows from the emp table:

DELETE FROM emp;

If there are many rows present in a table or cluster, using the DELETE statement consumes significant system resources as the rows are deleted, such as CPU time, redo log space, and rollback segment space for the table and any associated indexes. Also, as each row is deleted, triggers can be fired. The space previously allocated to the resulting empty table or cluster remains associated with that object. With DELETE you can choose which rows to delete, whereas TRUNCATE and DROP affect the entire object.

Using DROP and CREATE

You can drop a table and then re-create the table. For example, the following statements drop and then re-create the emp table:

DROP TABLE emp;
CREATE TABLE emp ( ... );

When dropping and re-creating a table or cluster, all associated indexes, integrity constraints, and triggers are also dropped, and all objects that depend on the dropped table or clustered table are invalidated. Also, all grants for the dropped table or clustered table are dropped.

Using TRUNCATE

You can delete all rows of the table using the TRUNCATE statement. For example, the following statement truncates the emp table:

TRUNCATE TABLE emp;

Using the TRUNCATE statement provides a fast, efficient method for deleting all rows from a table or cluster. A TRUNCATE statement does not generate any rollback information and it commits immediately. It is a DDL statement and cannot be rolled back. A TRUNCATE statement does not affect any structures associated with the table being truncated (constraints and triggers) or authorizations. A TRUNCATE statement also lets you specify whether space currently allocated for the table is returned to the containing tablespace after truncation.

You can truncate any table or cluster in your own schema. Any user who has the DROP ANY TABLE system privilege can truncate a table or cluster in any schema.

Before truncating a table or clustered table containing a parent key, all referencing foreign keys in different tables must be disabled. A self-referential constraint does not have to be disabled.

When a TRUNCATE statement deletes rows from a table, the triggers associated with the table are not fired. Also, if auditing is enabled, a TRUNCATE statement does not generate any audit information corresponding to DELETE statements. Instead, a single audit record is generated for the TRUNCATE statement that is issued. See Chapter 26, "Auditing Database Use" for information about auditing.

A hash cluster cannot be truncated, nor can tables within a hash or index cluster be individually truncated. Truncation of an index cluster deletes all rows from all tables in the cluster. If all the rows must be deleted from an individual clustered table, use the DELETE statement or drop and re-create the table.

The REUSE STORAGE or DROP STORAGE options of the TRUNCATE statement control whether space currently allocated for a table or cluster is returned to the containing tablespace after truncation. The default option, DROP STORAGE, reduces the number of extents allocated to the resulting table to the original setting for MINEXTENTS. Freed extents are then returned to the system and can be used by other objects.

Alternatively, the REUSE STORAGE option specifies that all space currently allocated for the table or cluster remains allocated to it. For example, the following statement truncates the emp_dept cluster, leaving all extents previously allocated for the cluster available for subsequent inserts and deletes:

TRUNCATE CLUSTER emp_dept REUSE STORAGE;

The REUSE or DROP STORAGE option also applies to any associated indexes. When a table or cluster is truncated, all associated indexes are also truncated. The storage parameters for a truncated table, cluster, or associated indexes are not changed as a result of the truncation.

Enabling and Disabling Triggers

Database triggers are procedures that are stored in the database and activated ("fired") when specific conditions occur, such as adding a row to a table. You can use triggers to supplement the standard capabilities of Oracle to provide a highly customized database management system. For example, you can create a trigger to restrict DML operations against a table, allowing only statements issued during regular business hours.

Database triggers can be associated with a table, schema, or database. They are implicitly fired when:

This is not a complete list. See the Oracle9i SQL Reference for a full list of statements and database events that cause triggers to fire.

Create triggers with the CREATE TRIGGER statement. They can be defined as firing BEFORE or AFTER the triggering event, or INSTEAD OF it. The following statement creates a trigger scott.emp_permit_changes on table scott.emp. The trigger fires before any of the specified statements are executed.

CREATE TRIGGER scott.emp_permit_changes
     BEFORE
     DELETE OR INSERT OR UPDATE
     ON scott.emp
BEGIN
     -- PL/SQL block goes here. This body is illustrative only: it rejects
     -- changes made outside regular business hours.
     IF TO_NUMBER(TO_CHAR(SYSDATE, 'HH24')) NOT BETWEEN 8 AND 17 THEN
          RAISE_APPLICATION_ERROR(-20001,
               'Changes to EMP are permitted only during business hours');
     END IF;
END;

You can later remove a trigger from the database by issuing the DROP TRIGGER statement.

A trigger can be in either of two distinct modes:

To enable or disable triggers using the ALTER TABLE statement, you must own the table, have the ALTER object privilege for the table, or have the ALTER ANY TABLE system privilege. To enable or disable an individual trigger using the ALTER TRIGGER statement, you must own the trigger or have the ALTER ANY TRIGGER system privilege.

See Also:

Enabling Triggers

You enable a disabled trigger using the ALTER TRIGGER statement with the ENABLE option. To enable the disabled trigger named reorder on the inventory table, enter the following statement:

ALTER TRIGGER reorder ENABLE;

To enable all triggers defined for a specific table, use the ALTER TABLE statement with the ENABLE ALL TRIGGERS option. To enable all triggers defined for the INVENTORY table, enter the following statement:

ALTER TABLE inventory
    ENABLE ALL TRIGGERS;

Disabling Triggers

Consider temporarily disabling a trigger if one of the following conditions is true:

You disable a trigger using the ALTER TRIGGER statement with the DISABLE option. To disable the trigger reorder on the inventory table, enter the following statement:

ALTER TRIGGER reorder DISABLE;

You can disable all triggers associated with a table at the same time using the ALTER TABLE statement with the DISABLE ALL TRIGGERS option. For example, to disable all triggers defined for the inventory table, enter the following statement:

ALTER TABLE inventory
    DISABLE ALL TRIGGERS;

Managing Integrity Constraints

Integrity constraints are rules that restrict the values for one or more columns in a table. Constraint clauses can appear in either CREATE TABLE or ALTER TABLE statements, and identify the column or columns affected by the constraint and identify the conditions of the constraint.

This section discusses the concepts of constraints and identifies the SQL statements used to define and manage integrity constraints. The following topics are contained in this section:

Integrity Constraint States

You can specify that a constraint is enabled (ENABLE) or disabled (DISABLE). If a constraint is enabled, data is checked as it is entered or updated in the database, and data that does not conform to the constraint's rule is prevented from being entered. If a constraint is disabled, then data that does not conform can be allowed to enter the database.

Additionally, you can specify that existing data in the table must conform to the constraint (VALIDATE). Conversely, if you specify NOVALIDATE, you are not ensured that existing data conforms.

An integrity constraint defined on a table can be in one of the following states:

For details about the meaning of these states and an understanding of their consequences, see the Oracle9i SQL Reference. Some of these consequences are discussed here.

Disabling Constraints

To enforce the rules defined by integrity constraints, the constraints should always be enabled. However, consider temporarily disabling the integrity constraints of a table for the following performance reasons:

In all three cases, temporarily disabling integrity constraints can improve the performance of the operation, especially in data warehouse configurations.

It is possible to enter data that violates a constraint while that constraint is disabled. Thus, you should always enable the constraint after completing any of the operations listed in the bullets above.

Enabling Constraints

While a constraint is enabled, no row violating the constraint can be inserted into the table. However, while the constraint is disabled such a row can be inserted. This row is known as an exception to the constraint. If the constraint is in the enable novalidated state, violations resulting from data entered while the constraint was disabled remain. The rows that violate the constraint must be either updated or deleted in order for the constraint to be put in the validated state.

You can identify exceptions to a specific integrity constraint while attempting to enable the constraint. See "Reporting Constraint Exceptions". All rows violating constraints are noted in an EXCEPTIONS table, which you can examine.

Enable Novalidate Constraint State

When a constraint is in the enable novalidate state, all subsequent statements are checked for conformity to the constraint. However, any existing data in the table is not checked. A table with enable novalidated constraints can contain invalid data, but it is not possible to add new invalid data to it. Enabling constraints in the novalidated state is most useful in data warehouse configurations that are uploading valid OLTP data.

Enabling a constraint does not require validation. Enabling a constraint novalidate is much faster than enabling and validating a constraint. Also, validating a constraint that is already enabled does not require any DML locks during validation (unlike validating a previously disabled constraint). Enforcement guarantees that no violations are introduced during the validation. Hence, enabling without validating enables you to reduce the downtime typically associated with enabling a constraint.

Integrity Constraint States: Procedures and Benefits

Using integrity constraint states in the following order can ensure the best benefits:

  1. Disable state.
  2. Perform the operation (load, export, import).
  3. Enable novalidate state.
  4. Enable state.
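
As a sketch, assuming a constraint named emp_pk on the emp table (the constraint and table names are illustrative), this sequence might look like:

ALTER TABLE emp MODIFY CONSTRAINT emp_pk DISABLE;

-- perform the bulk operation (load, export, import) here

ALTER TABLE emp MODIFY CONSTRAINT emp_pk ENABLE NOVALIDATE;
ALTER TABLE emp MODIFY CONSTRAINT emp_pk VALIDATE;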

Some benefits of using constraints in this order are:

Setting Integrity Constraints Upon Definition

When an integrity constraint is defined in a CREATE TABLE or ALTER TABLE statement, it can be enabled or disabled, and validated or not validated, as determined by your specification of the ENABLE/DISABLE clause. If the ENABLE/DISABLE clause is not specified in a constraint's definition, Oracle automatically enables and validates the constraint.

Disabling Constraints Upon Definition

The following CREATE TABLE and ALTER TABLE statements both define and disable integrity constraints:

CREATE TABLE emp (
    empno NUMBER(5) PRIMARY KEY DISABLE,   . . . ;

ALTER TABLE emp
   ADD PRIMARY KEY (empno) DISABLE;

An ALTER TABLE statement that defines and disables an integrity constraint never fails because of rows in the table that violate the integrity constraint. The definition of the constraint is allowed because its rule is not enforced.

Enabling Constraints Upon Definition

The following CREATE TABLE and ALTER TABLE statements both define and enable integrity constraints:

CREATE TABLE emp (
    empno NUMBER(5) CONSTRAINT emp_pk PRIMARY KEY,   . . . ;

ALTER TABLE emp
    ADD CONSTRAINT emp_pk PRIMARY KEY (empno);

An ALTER TABLE statement that defines and attempts to enable an integrity constraint can fail because rows of the table violate the integrity constraint. In this case, the statement is rolled back and the constraint definition is not stored and not enabled.

When you enable a UNIQUE or PRIMARY KEY constraint, an associated index is created.

See Also:

"Creating an Index Associated with a Constraint"

Modifying, Renaming, or Dropping Existing Integrity Constraints

You can use the ALTER TABLE statement to enable, disable, modify, or drop a constraint. When Oracle is using a UNIQUE or PRIMARY KEY index to enforce a constraint, and constraints associated with that index are dropped or disabled, the index is dropped, unless you specify otherwise.

While enabled foreign keys reference a PRIMARY or UNIQUE key, you cannot disable or drop the PRIMARY or UNIQUE key constraint or the index.

Disabling Enabled Constraints

The following statements disable integrity constraints. The second statement specifies that the associated indexes are to be kept.

ALTER TABLE dept
    DISABLE CONSTRAINT dname_ukey;

ALTER TABLE dept
    DISABLE PRIMARY KEY KEEP INDEX,
    DISABLE UNIQUE (dname, loc) KEEP INDEX;

The following statements enable novalidate disabled integrity constraints:

ALTER TABLE dept
    ENABLE NOVALIDATE CONSTRAINT dname_ukey;

ALTER TABLE dept
    ENABLE NOVALIDATE PRIMARY KEY,
    ENABLE NOVALIDATE UNIQUE (dname, loc);

The following statements enable or validate disabled integrity constraints:

ALTER TABLE dept
    MODIFY CONSTRAINT dname_ukey VALIDATE;
ALTER TABLE dept
    MODIFY PRIMARY KEY ENABLE NOVALIDATE;

The following statements enable disabled integrity constraints:

ALTER TABLE dept
    ENABLE CONSTRAINT dname_ukey;
ALTER TABLE dept
    ENABLE PRIMARY KEY,
    ENABLE UNIQUE (dname, loc);

To disable or drop a UNIQUE key or PRIMARY KEY constraint and all dependent FOREIGN KEY constraints in a single step, use the CASCADE option of the DISABLE or DROP clauses. For example, the following statement disables a PRIMARY KEY constraint and any FOREIGN KEY constraints that depend on it:

ALTER TABLE dept
    DISABLE PRIMARY KEY CASCADE;

Renaming Constraints

The ALTER TABLE ... RENAME CONSTRAINT statement enables you to rename any currently existing constraint for a table. The new constraint name must not conflict with any existing constraint names for a user.

The following statement renames the dname_ukey constraint for table dept:

ALTER TABLE dept
    RENAME CONSTRAINT dname_ukey TO dname_unikey;

When you rename a constraint, all dependencies on the base table remain valid.

The RENAME CONSTRAINT clause provides a means of renaming system generated constraint names.

Dropping Constraints

You can drop an integrity constraint if the rule that it enforces is no longer true, or if the constraint is no longer needed. You can drop the constraint using the ALTER TABLE statement with one of the following clauses:

The following two statements drop integrity constraints. The second statement keeps the index associated with the PRIMARY KEY constraint:

ALTER TABLE dept
    DROP UNIQUE (dname, loc);

ALTER TABLE emp
    DROP PRIMARY KEY KEEP INDEX,
    DROP CONSTRAINT dept_fkey;

If any FOREIGN KEY constraints reference the UNIQUE or PRIMARY KEY constraint, you must include the CASCADE CONSTRAINTS clause in the DROP clause, or you cannot drop the constraint.

Deferring Constraint Checks

When Oracle checks a constraint, it signals an error if the constraint is not satisfied. You can defer checking the validity of constraints until the end of a transaction.

When you issue the SET CONSTRAINTS statement, the SET CONSTRAINTS mode lasts for the duration of the transaction, or until another SET CONSTRAINTS statement resets the mode.


Notes:
  • You cannot issue a SET CONSTRAINT statement inside a trigger.
  • Deferrable unique and primary keys must use nonunique indexes.
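
A minimal sketch of defining a deferrable constraint, assuming the emp and dept tables used elsewhere in this chapter (the constraint name is hypothetical):

ALTER TABLE emp
    ADD CONSTRAINT emp_deptno_fk FOREIGN KEY (deptno)
    REFERENCES dept (deptno)
    DEFERRABLE INITIALLY IMMEDIATE;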

Set All Constraints Deferred

Within the application being used to manipulate the data, you must set all constraints deferred before you actually begin processing any data. Use the following statement to set all deferrable constraints deferred:

SET CONSTRAINTS ALL DEFERRED; 

Note:

The SET CONSTRAINTS statement applies only to the current transaction. The defaults specified when you create a constraint remain as long as the constraint exists. The ALTER SESSION SET CONSTRAINTS statement applies for the current session only.


Check the Commit (Optional)

You can check for constraint violations before committing by issuing the SET CONSTRAINTS ALL IMMEDIATE statement just before issuing the COMMIT. If there are any problems with a constraint, this statement fails and the constraint causing the error is identified. If you commit while constraints are violated, the transaction is rolled back and you receive an error message.
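
For example, the check might look like this at the end of a transaction:

SET CONSTRAINTS ALL IMMEDIATE;
COMMIT;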

Reporting Constraint Exceptions

If exceptions exist when a constraint is validated, an error is returned and the integrity constraint remains novalidated. When a statement is not successfully executed because integrity constraint exceptions exist, the statement is rolled back. If exceptions exist, you cannot validate the constraint until all exceptions to the constraint are either updated or deleted.

The error message does not identify the rows that violate the constraint. To determine which rows violate the integrity constraint, issue the ALTER TABLE statement with the EXCEPTIONS option in the ENABLE clause. The EXCEPTIONS option places the ROWID, table owner, table name, and constraint name of all exception rows into a specified table.

You must create an appropriate exceptions report table to accept information from the EXCEPTIONS option of the ENABLE clause before enabling the constraint. You can create an exception table by executing the UTLEXCPT.SQL script or the UTLEXPT1.SQL script.


Note:

Your choice of script to execute for creating the EXCEPTIONS table is dependent upon the compatibility level of your database and the type of table you are analyzing. See the Oracle9i SQL Reference for more information.


Both of these scripts create a table named EXCEPTIONS. You can create additional exceptions tables with different names by modifying and resubmitting the script.

The following statement attempts to validate the PRIMARY KEY of the dept table, and if exceptions exist, information is inserted into a table named EXCEPTIONS:

ALTER TABLE dept ENABLE PRIMARY KEY EXCEPTIONS INTO EXCEPTIONS;

If duplicate primary key values exist in the dept table and the name of the PRIMARY KEY constraint on dept is sys_c00610, the following rows might be placed in the table EXCEPTIONS by the previous statement:

SELECT * FROM EXCEPTIONS;

ROWID               OWNER      TABLE_NAME      CONSTRAINT
------------------  ---------  --------------  -----------
AAAAZ9AABAAABvqAAB  SCOTT      DEPT            SYS_C00610 
AAAAZ9AABAAABvqAAG  SCOTT      DEPT            SYS_C00610 

A more informative query joins the exception report table with the master table to list the actual rows that violate a specific constraint, as shown in the following example:

SELECT deptno, dname, loc FROM dept, EXCEPTIONS
    WHERE EXCEPTIONS.constraint = 'SYS_C00610'
    AND dept.rowid = EXCEPTIONS.row_id;

DEPTNO     DNAME             LOC
---------- --------------    -----------
10         ACCOUNTING        NEW YORK
10         RESEARCH          DALLAS

All rows that violate a constraint must be either updated or deleted from the table containing the constraint. When updating exceptions, you must change the value violating the constraint to a value consistent with the constraint or to a null. After the row in the master table is updated or deleted, the corresponding rows for the exception in the exception report table should be deleted to avoid confusion with later exception reports. The statements that update the master table and the exception report table should be in the same transaction to ensure transaction consistency.

To correct the exceptions in the previous examples, you might issue the following transaction:

UPDATE dept SET deptno = 20 WHERE dname = 'RESEARCH';
DELETE FROM EXCEPTIONS WHERE constraint = 'SYS_C00610';
COMMIT;

When managing exceptions, the goal is to eliminate all exceptions in your exception report table.


Note:

While you are correcting current exceptions for a table with the constraint disabled, it is possible for other users to issue statements creating new exceptions. You can avoid this by placing the constraint in the enable novalidate state before you start eliminating exceptions.


See Also:

Oracle9i Database Reference for a description of the EXCEPTIONS table

Viewing Constraint Information

Oracle provides the following views that enable you to see constraint definitions on tables and to identify columns that are specified in constraints:

View Description

DBA_CONSTRAINTS

ALL_CONSTRAINTS

USER_CONSTRAINTS

DBA view describes all constraint definitions in the database. ALL view describes constraint definitions accessible to current user. USER view describes constraint definitions owned by the current user.

DBA_CONS_COLUMNS

ALL_CONS_COLUMNS

USER_CONS_COLUMNS

DBA view describes all columns in the database that are specified in constraints. ALL view describes only those columns accessible to current user that are specified in constraints. USER view describes only those columns owned by the current user that are specified in constraints.
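
For example, a sketch of a query that lists the constraints defined on the dept table, along with their current state:

SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, STATUS, VALIDATED
    FROM USER_CONSTRAINTS
    WHERE TABLE_NAME = 'DEPT';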

See Also:

Oracle9i Database Reference contains descriptions of the columns in these views

Managing Object Dependencies

This section describes the various object dependencies, and contains the following topics:

First, review Table 21-1, which shows how objects are affected by changes to other objects on which they depend.

Table 21-1 Operations that Affect Object Status  
Operation Resulting Status of Object Resulting Status of Dependent Objects

CREATE [TABLE|SEQUENCE|SYNONYM]

VALID if there are no errors

No change1

ALTER TABLE (ADD, RENAME, MODIFY columns)

RENAME [TABLE|SEQUENCE|SYNONYM|VIEW]

VALID if there are no errors

INVALID

DROP [TABLE|SEQUENCE|SYNONYM|VIEW| PROCEDURE|FUNCTION|PACKAGE]

None. The object is dropped.

INVALID

CREATE [VIEW|PROCEDURE]2

VALID if there are no errors; INVALID if there are syntax or authorization errors

No change1

CREATE OR REPLACE [VIEW|PROCEDURE]2

VALID if there are no errors; INVALID if there are syntax or authorization errors

INVALID

REVOKE object privilege3 ON object TO|FROM user

No change

All objects of user that depend on object are INVALID3

REVOKE object privilege3 ON object TO|FROM PUBLIC

No change

All objects in the database that depend on object are INVALID3

REVOKE system privilege4 TO|FROM user

No change

All objects of user are INVALID4

REVOKE system privilege4 TO|FROM PUBLIC

No change

All objects in the database are INVALID4


1 Can cause dependent objects to be made INVALID, if object did not exist earlier.
2 Standalone procedures and functions, packages, and triggers.
3 Only DML object privileges, including SELECT, INSERT, UPDATE, DELETE, and EXECUTE; revalidation does not require recompiling.
4 Only DML system privileges, including SELECT, INSERT, UPDATE, DELETE ANY TABLE, and EXECUTE ANY PROCEDURE; revalidation does not require recompiling.

Oracle automatically recompiles an invalid view or PL/SQL program unit the next time it is used. In addition, a user can force Oracle to recompile a view or program unit using the appropriate SQL statement with the COMPILE clause. Forced compilations are most often used to test for errors when a dependent view or program unit is invalid, but is not currently being used. In these cases, automatic recompilation would not otherwise occur until the view or program unit was executed. To identify invalid dependent objects, query the views USER/ALL/DBA_OBJECTS.
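
For example, a sketch of a query that lists the invalid objects owned by the current user:

SELECT OBJECT_NAME, OBJECT_TYPE
    FROM USER_OBJECTS
    WHERE STATUS = 'INVALID';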

Manually Recompiling Views

To recompile a view manually, you must have the ALTER ANY TABLE system privilege or the view must be contained in your schema. Use the ALTER VIEW statement with the COMPILE clause to recompile a view. The following statement recompiles the view emp_dept contained in your schema:

ALTER VIEW emp_dept COMPILE;

Manually Recompiling Procedures and Functions

To recompile a standalone procedure manually, you must have the ALTER ANY PROCEDURE system privilege or the procedure must be contained in your schema. Use the ALTER PROCEDURE/FUNCTION statement with the COMPILE clause to recompile a standalone procedure or function. The following statement recompiles the stored procedure update_salary contained in your schema:

ALTER PROCEDURE update_salary COMPILE;

Manually Recompiling Packages

To recompile a package manually, you must have the ALTER ANY PROCEDURE system privilege or the package must be contained in your schema. Use the ALTER PACKAGE statement with the COMPILE clause to recompile either a package body or both a package specification and body. The following statement recompiles just the body of the package acct_mgmt:

ALTER PACKAGE acct_mgmt COMPILE BODY;

The next example compiles both the body and specification of the package acct_mgmt:

ALTER PACKAGE acct_mgmt COMPILE PACKAGE;

Managing Object Name Resolution

Object names referenced in SQL statements can consist of several pieces, separated by periods. The following describes how Oracle resolves an object name.

  1. Oracle attempts to qualify the first piece of the name referenced in the SQL statement. For example, in scott.emp, scott is the first piece. If there is only one piece, the one piece is considered the first piece.
    a. In the current schema, Oracle searches for an object whose name matches the first piece of the object name. If it does not find such an object, it continues with Step b.
    b. Oracle searches for a public synonym that matches the first piece of the name. If it does not find one, it continues with Step c.
    c. Oracle searches for a schema whose name matches the first piece of the object name. If it finds one, it returns to Step b, now using the second piece of the name as the object to find in the qualified schema. If the second piece does not correspond to an object in the previously qualified schema or there is not a second piece, Oracle returns an error.

    If no schema is found in Step c, the object cannot be qualified and Oracle returns an error.

  2. A schema object has been qualified. Any remaining pieces of the name must match a valid part of the found object. For example, if scott.emp.deptno is the name, scott is qualified as a schema, emp is qualified as a table, and deptno must correspond to a column (because emp is a table). If emp is qualified as a package, deptno must correspond to a public constant, variable, procedure, or function of that package.

When global object names are used in a distributed database, either explicitly or indirectly within a synonym, the local Oracle resolves the reference locally. For example, it resolves a synonym to a remote table's global object name. The partially resolved statement is shipped to the remote database, and the remote Oracle completes the resolution of the object as described here.

Because of how Oracle resolves references, it is possible for an object to depend on the nonexistence of other objects. This situation occurs when the dependent object uses a reference that would be interpreted differently were another object present. For example, assume the following:

When jward creates the dept_salaries view, the reference to emp is resolved by first looking for jward.emp as a table, view, or private synonym, none of which is found, and then as a public synonym named emp, which is found. As a result, Oracle notes that jward.dept_salaries depends on the nonexistence of jward.emp and on the existence of public.emp.

Now assume that jward decides to create a new view named emp in his schema using the following statement:

CREATE VIEW emp AS 
     SELECT empno, ename, mgr, deptno 
     FROM company.emp; 

Notice that jward.emp does not have the same structure as company.emp.

As it attempts to resolve references in object definitions, Oracle internally makes note of dependencies that the new dependent object has on "nonexistent" objects--schema objects that, if they existed, would change the interpretation of the object's definition. Such dependencies must be noted in case a nonexistent object is later created. If a nonexistent object is created, all dependent objects must be invalidated so that dependent objects can be recompiled and verified and all dependent function-based indexes must be marked unusable.

Therefore, in the previous example, as jward.emp is created, jward.dept_salaries is invalidated because it depends on jward.emp. Then when jward.dept_salaries is used, Oracle attempts to recompile the view. As Oracle resolves the reference to emp, it finds jward.emp (public.emp is no longer the referenced object). Because jward.emp does not have a sal column, Oracle finds errors when replacing the view, leaving it invalid.

In summary, you must manage dependencies on nonexistent objects checked during object resolution in case the nonexistent object is later created.

Changing Storage Parameters for the Data Dictionary

If your database is very large or contains an unusually large number of objects, columns in tables, constraint definitions, users, or other definitions, the tables that make up the data dictionary might at some point be unable to acquire additional extents. For example, a data dictionary table could require an additional extent, but there is not enough contiguous space in the SYSTEM tablespace. If this happens, you cannot create new objects, even though the tablespace intended to hold the objects has sufficient space. To remedy this situation, you can change the storage parameters of the underlying data dictionary tables, just as you can change the storage settings for user-created segments. This allows the data dictionary tables to be allocated more extents. For example, you can adjust the values of NEXT or PCTINCREASE for a data dictionary table.


Caution:

Exercise caution when changing the storage settings for the data dictionary objects. If you choose inappropriate settings, you could damage the structure of the data dictionary and be forced to re-create your entire database. For example, if you set PCTINCREASE for the data dictionary table USER$ to 0 and NEXT to 2K, that table will quickly reach the maximum number of extents for a segment. At that point you will not be able to create any more users or roles without exporting, re-creating, and importing the entire database.


This section describes aspects of changing data dictionary storage parameters, and contains the following topics:

Structures in the Data Dictionary

The following tables and clusters contain the definitions of all the user-created objects in the database:

Table or Cluster Contains definitions for:

SEG$

Segments defined in the database (including temporary segments)

OBJ$

User-defined objects in the database (including clustered tables); indexed by I_OBJ1 and I_OBJ2

UNDO$

Rollback segments defined in the database; indexed by I_UNDO1

FET$

Available free extents not allocated to any segment

UET$

Extents allocated to segments

TS$

Tablespaces defined in the database

FILE$

Files that make up the database; indexed by I_FILE1

FILEXT$

Datafiles with the AUTOEXTEND option set on

TAB$

Tables defined in the database (includes clustered tables); indexed by I_TAB1

CLU$

Clusters defined in the database

IND$

Indexes defined in the database; indexed by I_IND1

ICOL$

Columns that have indexes defined on them (includes individual entries for each column in a composite index); indexed by I_ICOL1

COL$

Columns defined in tables in the database; indexed by I_COL1 and I_COL2

CON$

Constraints defined in the database (includes information on constraint owner); indexed by I_CON1 and I_CON2

CDEF$

Definitions of constraints in CON$; indexed by I_CDEF1, I_CDEF2, and I_CDEF3

CCOL$

Columns that have constraints defined on them (includes individual entries for each column in a composite key); indexed by I_CCOL1

USER$

Users and roles defined in the database; indexed by I_USER1

TSQ$

Tablespace quotas for users (contains one entry for each tablespace quota defined for each user)

C_OBJ#

Cluster containing TAB$, CLU$, ICOL$, IND$, and COL$; indexed by I_OBJ#

C_TS#

Cluster containing FET$, TS$, and FILE$; indexed by I_TS#

C_USER#

Cluster containing USER$ and TSQ$; indexed by I_USER#

C_COBJ#

Cluster containing CDEF$ and CCOL$; indexed by I_COBJ#

Of all of the data dictionary segments, the following are the most likely to require change:

Table or Cluster Comments

C_TS#

If the free space in your database is very fragmented

C_OBJ#

If you have many indexes or many columns in your tables

CON$, C_COBJ#

If you use integrity constraints heavily

C_USER#

If you have a large number of users defined in your database

For the clustered tables, you must change the storage settings for the cluster, not for the table.
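
For example, a sketch of adjusting the storage parameters for the C_COBJ# cluster while connected as SYS (the values shown are illustrative only; heed the caution earlier in this section before changing dictionary storage):

ALTER CLUSTER C_COBJ# STORAGE (NEXT 512K PCTINCREASE 0);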

Errors that Require Changing Data Dictionary Storage

Oracle returns an error if a user tries to create a new object that requires Oracle to allocate an additional extent to the data dictionary when it is unable to allocate an extent. The following error message indicates this kind of problem.

ORA-1653 unable to extend table name by num in tablespace name

If you receive this error message and the segment you were trying to change (such as a table or rollback segment) has not reached the limits specified for it in its definition, check the storage settings for the object that contains its definition.

For example, if you received an ORA-1653 while trying to define a new PRIMARY KEY constraint on a table and there is sufficient space for the index that Oracle must create for the key, check if CON$ or C_COBJ# cannot be allocated another extent. To do this, query DBA_SEGMENTS. If another extent cannot be allocated, consider changing the storage parameters for CON$ or C_COBJ#. See "Example 7: Displaying Segments that Cannot Allocate Additional Extents".
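
For example, a sketch of a query that shows the current extent allocation and next-extent size for these dictionary segments (compare NEXT_EXTENT with the free space reported by DBA_FREE_SPACE, as in Example 7):

SELECT SEGMENT_NAME, SEGMENT_TYPE, EXTENTS, MAX_EXTENTS, NEXT_EXTENT
    FROM DBA_SEGMENTS
    WHERE SEGMENT_NAME IN ('CON$', 'C_COBJ#');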

Displaying Information About Schema Objects

Oracle provides data dictionary views and PL/SQL packages that allow you to display information about schema objects. Views and packages that are unique to a particular schema object are described in the chapter of this book associated with that object. This section describes views and packages that are generic in nature and apply to multiple schema objects.

Using PL/SQL Packages to Display Information About Schema Objects

These Oracle supplied PL/SQL packages provide information about schema objects:

Package and Procedure/Function Description

DBMS_METADATA.GET_DDL

Use to obtain metadata (in the form of DDL used to create the object) about a schema object.

The following package procedures provide information about space usage and free blocks in schema objects:

DBMS_SPACE.UNUSED_SPACE

Returns information about unused space in an object (table, index, or cluster).

DBMS_SPACE.FREE_BLOCKS

Returns information about free data blocks in an object (table, index, or cluster) whose segment free space is managed by free lists (segment space management is MANUAL).

DBMS_SPACE.SPACE_USAGE

Returns information about free data blocks in an object (table, index, or cluster) whose segment space management is AUTO.

The following sections contain examples of using some of these packages.

See Also:

Oracle9i Supplied PL/SQL Packages and Types Reference for a description of PL/SQL packages

Example 1: Using the DBMS_METADATA Package

The DBMS_METADATA package is a powerful tool for obtaining the complete definition of a schema object. It enables you to obtain all of the attributes of an object in one pass. The object is described as DDL that can be used to (re)create it.

In this example the GET_DDL function is used to fetch the DDL for all tables in the current schema, filtering out nested tables and overflow segments. The SET_TRANSFORM_PARAM (with the handle value equal to DBMS_METADATA.SESSION_TRANSFORM meaning "for the current session") is used to specify that storage clauses are not to be returned in the SQL DDL. Afterwards, the session-level transform parameters are reset to their defaults. Once set, transform parameter values remain in effect until specifically reset to their defaults.

EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(
     DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name)
     FROM USER_ALL_TABLES u
     WHERE u.nested='NO' 
     AND (u.iot_type is null or u.iot_type='IOT');
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(
     DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT');
See Also:

Oracle9i XML Developer's Kits Guide - XDK for detailed information and further examples relating to the use of the DBMS_METADATA package

Example 2: Using DBMS_SPACE.UNUSED_SPACE

The following SQL*Plus example uses the DBMS_SPACE package to obtain unused space information.

SQL> VARIABLE total_blocks NUMBER
SQL> VARIABLE total_bytes NUMBER
SQL> VARIABLE unused_blocks NUMBER
SQL> VARIABLE unused_bytes NUMBER
SQL> VARIABLE lastextf NUMBER
SQL> VARIABLE last_extb NUMBER
SQL> VARIABLE lastusedblock NUMBER
SQL> exec DBMS_SPACE.UNUSED_SPACE('SCOTT', 'EMP', 'TABLE', :total_blocks, -
>    :total_bytes,:unused_blocks, :unused_bytes, :lastextf, -
>    :last_extb, :lastusedblock);

PL/SQL procedure successfully completed.

SQL> PRINT

TOTAL_BLOCKS
------------
           5

TOTAL_BYTES
-----------
      10240

...

LASTUSEDBLOCK
-------------
            3

Using Views to Display Information About Schema Objects

These views display information about schema objects:

View Description

DBA_OBJECTS

ALL_OBJECTS

USER_OBJECTS

DBA view describes all schema objects in the database. ALL view describes objects accessible to current user. USER view describes objects owned by the current user.

DBA_CATALOG

ALL_CATALOG

USER_CATALOG

List the name, type, and owner (USER view does not display owner) for all tables, views, synonyms, and sequences in the database.

DBA_DEPENDENCIES

ALL_DEPENDENCIES

USER_DEPENDENCIES

Describe all dependencies between procedures, packages, functions, package bodies, and triggers, including dependencies on views without any database links.

The following views contain information about segments of the database:

DBA_SEGMENTS

USER_SEGMENTS

Describe storage allocated for all database segments, or for segments for the current user.

The following views contain information about extents of the database:

DBA_EXTENTS

USER_EXTENTS

Describe extents comprising all segments either in the database, or segments for the current user.

DBA_FREE_SPACE

USER_FREE_SPACE

List free extents in all tablespaces, or free extents in tablespaces accessible to the current user.

The following sections contain examples of using some of these views.

See Also:

Oracle9i Database Reference for a complete description of data dictionary views

Example 1: Displaying Schema Objects By Type

The following query lists all of the objects owned by the user issuing the query:

SELECT OBJECT_NAME, OBJECT_TYPE 
    FROM USER_OBJECTS;

OBJECT_NAME                OBJECT_TYPE
-------------------------  -------------------
EMP_DEPT                   CLUSTER
EMP                        TABLE
DEPT                       TABLE
EMP_DEPT_INDEX             INDEX
PUBLIC_EMP                 SYNONYM
EMP_MGR                    VIEW

Example 2: Displaying Column Information

Column information, such as name, datatype, length, precision, scale, and default data values can be listed using one of the views ending with the _COLUMNS suffix. For example, the following query lists all of the default column values for the emp and dept tables:

SELECT TABLE_NAME, COLUMN_NAME, DATA_DEFAULT
    FROM USER_TAB_COLUMNS
    WHERE TABLE_NAME = 'DEPT' OR TABLE_NAME = 'EMP';

TABLE_NAME   COLUMN_NAME   DATA_DEFAULT
----------   ------------- --------------------
DEPT         DEPTNO
DEPT         DNAME
DEPT         LOC           'NEW YORK'
EMP          EMPNO
EMP          ENAME
EMP          JOB
EMP          MGR
EMP          HIREDATE       SYSDATE
EMP          SAL
EMP          COMM
EMP          DEPTNO

Notice that not all columns have user-specified defaults. These columns automatically have NULL as the default.

Example 3: Displaying Dependencies of Views and Synonyms

When you create a view or a synonym, the view or synonym is based on its underlying base object. The ALL/USER/DBA_DEPENDENCIES data dictionary views can be used to reveal the dependencies for a view. The ALL/USER/DBA_SYNONYMS data dictionary views can be used to list the base object of a synonym. For example, the following query lists the base objects for the synonyms created by the user jward:

SELECT TABLE_OWNER, TABLE_NAME, SYNONYM_NAME
    FROM DBA_SYNONYMS
    WHERE OWNER = 'JWARD';

TABLE_OWNER             TABLE_NAME   SYNONYM_NAME
----------------------  -----------  -----------------
SCOTT                   DEPT         DEPT
SCOTT                   EMP          EMP

Example 4: Displaying General Segment Information

The following query returns the name of each rollback segment, the tablespace that contains each, and the size of each rollback segment:

SELECT SEGMENT_NAME, TABLESPACE_NAME, BYTES, BLOCKS, EXTENTS
    FROM DBA_SEGMENTS
    WHERE SEGMENT_TYPE = 'ROLLBACK';

SEGMENT_NAME  TABLESPACE_NAME   BYTES       BLOCKS     EXTENTS
------------  ---------------   ---------   -------    ---------
RS1           SYSTEM                20480        10            2
RS2           TS1                   40960        20            3
SYSTEM        SYSTEM               184320        90            3

Example 5: Displaying General Extent Information

General information about the currently allocated extents in a database is stored in the DBA_EXTENTS data dictionary view. For example, the following query identifies the extents associated with rollback segments and the size of each of those extents:

SELECT SEGMENT_NAME, BYTES, BLOCKS
    FROM DBA_EXTENTS
    WHERE SEGMENT_TYPE = 'ROLLBACK';

SEGMENT_NAME      BYTES          BLOCKS
---------------   ---------      --------
RS1                   10240             5
RS1                   10240             5
SYSTEM                51200            25
SYSTEM                51200            25
SYSTEM                51200            25

Notice that the rs1 rollback segment consists of two extents, both 10K, while the SYSTEM rollback segment consists of three equally sized extents of 50K.

Example 6: Displaying the Free Space (Extents) of a Database

Information about the free extents (extents not allocated to any segment) in a database is stored in the DBA_FREE_SPACE data dictionary view. For example, the following query reveals the amount of free space available as free extents in each tablespace:

SELECT TABLESPACE_NAME, FILE_ID, BYTES, BLOCKS
    FROM DBA_FREE_SPACE;

TABLESPACE_NAME        FILE_ID     BYTES       BLOCKS
-------------------    ---------   --------    ----------
SYSTEM                         1    8120320          3965
SYSTEM                         1      10240             5
TS1                            2   10432512          5094

Example 7: Displaying Segments that Cannot Allocate Additional Extents

You can also use DBA_FREE_SPACE, in combination with the views DBA_SEGMENTS, DBA_TABLES, DBA_CLUSTERS, DBA_INDEXES, and DBA_ROLLBACK_SEGS, to determine whether any segment, including the data dictionary segments, is unable to allocate additional extents.

A segment may be unable to allocate an additional extent for any of the following reasons:

The following query returns the names, owners, and tablespaces of all segments that satisfy any of the above criteria:

SELECT a.SEGMENT_NAME, a.SEGMENT_TYPE, a.TABLESPACE_NAME, a.OWNER 
    FROM DBA_SEGMENTS a
    WHERE a.NEXT_EXTENT >= (SELECT MAX(b.BYTES)
        FROM DBA_FREE_SPACE b
        WHERE b.TABLESPACE_NAME = a.TABLESPACE_NAME)
    OR a.EXTENTS = a.MAX_EXTENTS
    OR a.EXTENTS = 'data_block_size' ;

Note:

When you use this query, replace data_block_size with the data block size for your system.


Once you have identified a segment that cannot allocate additional extents, you can solve the problem in either of two ways, depending on its cause:


Copyright © 2001, 2002 Oracle Corporation. All Rights Reserved.