Oracle Database 10g represents a major milestone in Oracle's drive toward self-managing databases. It automates many routine administrative tasks and considerably simplifies key DBA functions, such as performance diagnostics, SQL tuning, and space and memory management. It also provides several advisors that guide DBAs in managing key components of the database by giving specific recommendations along with the potential benefit. Furthermore, Oracle Database 10g proactively sends alerts when a problem is anticipated, thus facilitating proactive rather than reactive database management.
This chapter contains the following topics:
The Oracle Universal Installer (OUI) is a GUI tool for installing Oracle software. It automates all installation tasks, performs comprehensive prerequisite checks (such as operating system version, software patches, and capacity), installs selected software components, and performs all post-install configuration.
The installation process is self-contained and automatically sets up the required infrastructure for routine monitoring and administration. The Enterprise Manager Database Management Console is automatically configured to let you get started with database administrative tasks without any manual configuration. The Enterprise Manager Database Console provides all essential functionality for managing a single database, including alert notification, job scheduling, and software management. In addition, all Oracle server components, such as the database, listener, management framework, and so on, are configured for automated startup and shutdown.
The Database Configuration Assistant (DBCA) is a GUI tool for database creation. It lets you create all possible configurations of the database, whether a stand-alone database, a Real Application Clusters (RAC) database, or a standby database. During the database creation process, the DBCA guides you in setting up an automated disk-based backup and registering the database with an LDAP server, if available. A database created using the DBCA is fully set up and ready to use in all respects.
The Instant Client is the simplest way to deploy a full Oracle Client application built with OCI, OCCI, JDBC-OCI, or ODBC drivers. It provides the necessary Oracle Client libraries in a small set of files. Installation is as easy as copying a few shared libraries to a directory on the client machine. If this directory is accessible through the operating system library path variable (for instance, LD_LIBRARY_PATH or PATH), then the application will operate in Instant Client mode. Instant Client deployment does not require the ORACLE_HOME environment, nor does it require the large number of code and data files provided in a full Oracle Client install, thereby significantly reducing the client application disk space needs. There is no loss in functionality or performance for an application deployed using Instant Client when compared to the same application running in a full ORACLE_HOME environment.
With the Database Upgrade Assistant (DBUA), you can upgrade any database configuration, including RAC and standby, just by answering a few simple questions. It automatically checks that adequate resources are available, ensures adherence to best practices (such as backing up the database before beginning the upgrade process and replacing obsolete and deprecated initialization parameters), and verifies the successful completion of the operation.
The upgrade process is restartable, allowing it to automatically resume from the point of interruption. You can also get an estimate of how long the upgrade process is likely to take.
The Oracle Database provides a number of initialization parameters to optimize its operation in diverse environments. Only a few of these parameters need to be explicitly set, because the default values are adequate in the majority of cases.
There are approximately 30 basic parameters. The remainder of the parameters are preserved to allow expert DBAs to adapt the behavior of the Oracle Database to meet unique requirements without overwhelming those who have no such requirements.
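For example, one way to review just the basic parameters is to query the V$PARAMETER view, which exposes an ISBASIC column in this release. The following is a minimal sketch and assumes you have the privileges to query the view.

    SELECT name, value
      FROM v$parameter
     WHERE isbasic = 'TRUE'     -- basic parameters only
     ORDER BY name;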
Data Pump enables very high-speed data and metadata loading and unloading to and from the Oracle Database. It automatically manages and schedules multiple, parallel streams of load or unload for maximum throughput.
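As an illustration only, the following PL/SQL sketch starts a schema-mode Data Pump export through the DBMS_DATAPUMP API. The schema name HR and the directory object DPUMP_DIR are assumptions; both must already exist in your database.

    DECLARE
      h NUMBER;
    BEGIN
      -- open a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      -- write the dump file to an existing directory object (assumed name)
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_export.dmp',
                             directory => 'DPUMP_DIR');
      -- export only the HR schema (assumed schema)
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                    value => 'IN (''HR'')');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.DETACH(h);
    END;
    /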
The transportable tablespace feature lets you quickly move a tablespace across Oracle databases. This can be much faster than performing either an export/import or unload/load of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace structural information. You can also use transportable tablespaces to move index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
Data Pump functionality, together with the cross-platform transportable tablespace feature, provides a powerful, easy-to-use, and high-performance toolset for moving data in and out of the database.
Oracle Database has a sophisticated self-management infrastructure that allows the database to learn about itself and use this information to adapt to workload variations or to automatically remedy any potential problem. The self-management infrastructure includes the following:
Automatic Workload Repository (AWR) is a built-in repository in every Oracle Database. At regular intervals, the Oracle Database makes a snapshot of all its vital statistics and workload information and stores them in AWR. By default, the snapshots are made every 30 minutes, but you can change this frequency. The snapshots are stored in the AWR for a certain period of time (7 days by default) after which they are automatically purged.
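For example, the snapshot interval and retention period (both expressed in minutes) can be adjusted with the DBMS_WORKLOAD_REPOSITORY package; the values below are purely illustrative.

    BEGIN
      DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
        interval  => 60,          -- take a snapshot every 60 minutes
        retention => 10 * 1440);  -- keep snapshots for 10 days (value in minutes)
    END;
    /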
The captured data allows both system level and user level analysis to be performed, again reducing the requirement to repeat the workload in order to diagnose problems.
Optimizations ensure that the capture of data is performed efficiently, with minimal overhead. One example of these optimizations is in the SQL statement capture: AWR maintains deltas of SQL statement statistics between snapshots. This lets the Oracle Database efficiently capture only those statements that have significantly impacted the load of the system since the previous snapshot, rather than all statements that have performed above a threshold level of work since they first appeared in the system.
AWR forms the foundation for all self-management functionality of Oracle Database. It is the source of information that gives the Oracle Database an historical perspective on how it is being used and enables it to make decisions that are accurate and specifically tailored for each environment.
By analyzing the information stored in AWR, the database can identify the need to perform routine maintenance tasks, such as optimizer statistics refresh. The automated maintenance tasks infrastructure enables the Oracle Database to automatically perform such operations. It uses the Scheduler to run such tasks in a pre-defined "maintenance window".
By default, the maintenance window starts at 10 PM every night and lasts until 6 AM the next morning, and it spans the entire weekend. All attributes of the maintenance window are customizable, including the start and end times, frequency, days of the week, and so on. Also, the impact of automated maintenance tasks on normal database operations can be limited by associating a Database Resource Manager resource plan with the maintenance window.
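As a sketch only, the predefined windows can be inspected, and if desired adjusted, through the Scheduler. The window name used in the second statement is assumed to be one of the defaults shipped with the database.

    -- list the existing windows and their schedules
    SELECT window_name, repeat_interval, duration, resource_plan
      FROM dba_scheduler_windows;

    -- move the start of a window to 11 PM (window name is an assumption)
    BEGIN
      DBMS_SCHEDULER.SET_ATTRIBUTE(
        name      => 'WEEKNIGHT_WINDOW',
        attribute => 'repeat_interval',
        value     => 'freq=daily;byday=MON,TUE,WED,THU,FRI;byhour=23;byminute=0;bysecond=0');
    END;
    /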
Optimizer statistics are automatically refreshed using the automatic maintenance task infrastructure.
For problems that cannot be resolved automatically and require administrators to be notified, such as running out of space, the Oracle Database provides server-generated alerts. The Oracle Database can monitor itself and send out alerts to notify you of any problem in an efficient and timely manner.
Monitoring activities take place as the database performs its regular operations, which ensures that the database is aware of problems the moment they arise. The alerts produced by the Oracle Database not only report the problem, they also provide recommendations on how the reported problem can be resolved. This ensures quick problem resolution and helps prevent potential failures.
The Oracle Database includes a number of advisors for different subsystems in the database to automatically determine how the operation of the corresponding subcomponents could be further optimized. The SQL Tuning Advisor and the SQLAccess Advisor, for example, provide recommendations for running SQL statements faster. Memory advisors help size the various memory components without resorting to trial-and-error techniques. The Segment Advisor handles space-related issues, such as recommending wasted-space reclamation and analyzing growth trends, while the Undo Advisor guides you in sizing the undo tablespace correctly. The various advisors are discussed throughout this chapter.
To ensure the consistency and uniformity in the way advisors function and allow them to interact with each other seamlessly, the Oracle Database includes an advisor framework. The advisor framework provides a consistent manner in which advisors are invoked and results are reported. Even though these advisors are primarily used by the database to optimize its own performance, they can be invoked by administrators to get more insight into the functioning of a particular subcomponent.
Building upon the data captured in AWR, the Automatic Database Diagnostic Monitor (ADDM) lets the Oracle Database diagnose its own performance and determine how identified problems could be resolved. ADDM runs automatically after each AWR statistics capture, making the performance diagnostic data readily available.
ADDM examines data captured in AWR and performs analysis to determine the major issues on the system on a proactive basis. In many cases, it recommends solutions and quantifies expected benefits. ADDM takes a holistic approach to the performance of the system, using time as a common currency between components. ADDM identifies those areas of the system that are consuming the most time. ADDM drills down to identify the root cause of problems, rather than just the symptoms, and reports the impact that the problem is having on the system overall. If a recommendation is made, it reports the benefits that can be expected in terms of time. The use of time throughout allows the impact of several problems or recommendations to be compared.
ADDM focuses on activities that the database is spending most time on and then drills down through a sophisticated problem classification tree. Some common problems detected by ADDM include the following:
CPU bottlenecks
Poor connection management
Excessive parsing
Lock contention
I/O capacity
Undersizing of Oracle memory structures; for example, PGA, buffer cache, log buffer
High load SQL statements
High PL/SQL and Java time
High checkpoint load and cause; for example, small log files, aggressive MTTR setting
RAC-specific issues
Besides reporting potential performance issues, ADDM also documents non-problem areas of the system. The subcomponents, such as I/O and memory, that are not significantly impacting system performance are pruned from the classification tree at an early stage and are listed so that you can quickly see that there is little to be gained by performing actions in those areas.
You no longer need to collect huge volumes of diagnostic data and spend hours analyzing it to find answers to performance issues. You can simply follow the recommendations made by ADDM with just a few mouse clicks.
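Although Enterprise Manager is the usual interface, ADDM results can also be inspected directly from the advisor dictionary views. The following query is a sketch; the task name is hypothetical.

    SELECT finding_id, type, impact, message
      FROM dba_advisor_findings
     WHERE task_name = 'ADDM_TASK_1234'   -- hypothetical ADDM task name
     ORDER BY impact DESC;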
The Oracle Database completely automates the SQL tuning process. ADDM identifies SQL statements consuming unusually high system resources and therefore causing performance problems. In addition, the top SQL statements in terms of CPU and shared memory consumption are automatically captured in AWR. Thus, the identification of high load SQL statements happens automatically in the Oracle Database and requires no intervention.
After identifying the top resource-consuming SQL statements, the Oracle Database can automatically analyze them and recommend solutions using the Automatic Tuning Optimizer. Automatic SQL Tuning is exposed with an advisor, called the SQL Tuning Advisor. The SQL Tuning Advisor takes one or more SQL statements as input and produces well-tuned plans along with tuning advice. You do not need to do anything other than invoke the SQL Tuning Advisor.
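A minimal DBMS_SQLTUNE sketch follows; the SQL_ID and the task name are placeholders, not values from a real system.

    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      -- create and run a tuning task for one statement (SQL_ID is a placeholder)
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id    => 'abcd1234efgh5',
                  task_name => 'tune_top_stmt');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /

    -- review the findings and recommendations
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_top_stmt') FROM dual;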
The solution comes right from the optimizer and not from external tools using predefined heuristics. This provides several advantages: a) the tuning is done by the system component that is ultimately responsible for the execution plans and SQL performance; b) the tuning process is fully cost-based, and it naturally accounts for any changes and enhancements made to the query optimizer; c) the tuning process considers the past execution statistics of a SQL statement and customizes the optimizer settings for that statement; and d) it collects auxiliary information in conjunction with the regular statistics, based on what the query optimizer considers useful.
The recommendation of the Automatic Tuning Optimizer can fall into one of the following categories:
Statistics Analysis: The Automatic Tuning Optimizer checks each query object for missing or stale statistics and makes recommendations to gather relevant statistics. It also collects auxiliary information to supply missing statistics or correct stale statistics in case the recommendations are not implemented. Because the Oracle Database automatically gathers optimizer statistics, this is usually not a problem unless the automatic statistics gathering functionality has been disabled.
SQL Profiling: The Automatic Tuning Optimizer verifies its own estimates and collects auxiliary information to remove estimation errors. It also collects auxiliary information in the form of customized optimizer settings (for example, first rows versus all rows) based on the past execution history of the SQL statement. It builds a SQL profile using the auxiliary information and makes a recommendation to create it. The SQL profile then enables the query optimizer (under normal mode) to generate a well-tuned plan. The most powerful aspect of SQL profiles is that they enable tuning of queries without requiring any syntactical changes, thereby providing a unique database-resident solution to tune SQL statements embedded in packaged applications.
Access Path Analysis: The Automatic Tuning Optimizer considers whether a new index can be used to significantly improve access to each table in the query and when appropriate makes recommendations to create such indexes.
SQL Structure Analysis: The Automatic Tuning Optimizer tries to identify SQL statements that lend themselves to bad plans and makes relevant suggestions to restructure them. The suggested restructuring can be syntactic as well as semantic changes to the SQL code.
Both access path and SQL structure analysis can be useful in tuning the performance of an application under development or a homegrown production application where the administrators and developers have access to application code.
The SQLAccess Advisor can automatically analyze the schema design for a given workload and recommend indexes and materialized views to create, retain, or drop as appropriate for the workload. For single statement scenarios, the advisor only recommends adjustments that affect the current statement. For complete business workloads, the advisor makes recommendations after considering the impact on the entire workload.
While generating recommendations, the SQLAccess Advisor considers the impact of adding new indexes and materialized views on data manipulation activities, such as insert, update, and delete, in addition to the performance improvement they are likely to provide for queries. The SQLAccess Advisor provides an easy to use interface and requires very little system knowledge. It can be run without affecting production systems, because the data can be gathered from the production system and taken to another machine where the SQLAccess Advisor can be run.
The System Global Area (SGA) is a shared memory region that contains data and control information for one Oracle instance. Automatic Shared Memory Management automates the management of the SGA used by an Oracle Database instance. Simply specify the total amount of SGA memory available to an instance with the SGA_TARGET parameter. The Oracle Database then automatically distributes the available memory among the various components as required.
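For example (the value is illustrative):

    ALTER SYSTEM SET SGA_TARGET = 800M;   -- total SGA; components are then sized automatically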
Oracle provides dynamic memory management that allows the Oracle shared memory components to be resized dynamically. It also provides transparent management of working memory for SQL execution by self-tuning the initialization runtime parameters that control the allocation of private memory. On systems with a small number of users, this reduces the time and effort required to tune memory parameters for applications such as data warehouse and reporting applications. On systems with many users, it avoids the need to tune memory for individual workloads.
Oracle provides the following advisors to help size the memory allocation for optimal database performance.
The Shared Pool Advisor determines the optimal shared pool size by tracking its use by the library cache. The amount of memory available for the library cache can drastically affect the parse rate of an Oracle instance. The shared pool advisor statistics provide information about library cache memory, letting you predict how changes in the size of the shared pool can affect aging out of objects in the shared pool.
The Buffer Cache Advisor determines the optimal size of the buffer cache. When configuring a new instance, it is difficult to know the correct size for the buffer cache. Typically, you make a first estimate for the cache size, then run a representative workload on the instance and examine the relevant statistics to see whether the cache is under- or over-configured. A number of statistics can be used to examine buffer cache activity, including the V$DB_CACHE_ADVICE view and the buffer cache hit ratio.
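For instance, a sketch of a query against this view for the default buffer pool:

    SELECT size_for_estimate       AS cache_mb,
           buffers_for_estimate    AS buffers,
           estd_physical_read_factor,
           estd_physical_reads
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT'
       AND advice_status = 'ON';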
The Program Global Area (PGA) Advisor tunes PGA memory allocated to individual server processes. Under automatic PGA memory management mode, Oracle honors the PGA_AGGREGATE_TARGET limit by dynamically controlling the amount of PGA memory allotted to SQL work areas. At the same time, Oracle maximizes the performance of all the memory-intensive SQL operators by maximizing the number of work areas that are using an optimal amount of PGA memory (cache memory). The rest of the work areas are executed in one-pass mode, unless the PGA memory limit set by PGA_AGGREGATE_TARGET is so low that multi-pass execution is required to further reduce PGA memory consumption and honor the PGA target limit.
When configuring a new instance, it is difficult to know an appropriate setting for PGA_AGGREGATE_TARGET. You can determine this setting in three stages, as shown in the sketch after this list:
Make a first estimate for PGA_AGGREGATE_TARGET.
Run a representative workload on the instance and monitor performance, using the PGA statistics collected by Oracle, to see whether the maximum PGA size is underconfigured or overconfigured.
Tune PGA_AGGREGATE_TARGET using Oracle's PGA advice statistics.
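The following sketch illustrates stages 1 and 3; the initial value is arbitrary.

    -- stage 1: first estimate (value is illustrative)
    ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 200M;

    -- stage 3: review the advice statistics after running a representative workload
    SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
           estd_pga_cache_hit_percentage,
           estd_overalloc_count
      FROM v$pga_target_advice;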
When the Automatic Shared Memory Management is enabled, the most commonly configured components are sized automatically. These include the following:
Shared pool (for SQL and PL/SQL execution)
Java pool (for Java execution state)
Large pool (for large allocations such as RMAN backup buffers)
Buffer cache
There is no need to explicitly set the size of any of these components, and by default the parameters for these components appear to have values of zero. Whenever a component needs memory, it can request a transfer of memory from another component through the internal auto-tuning mechanism. This happens transparently, without user intervention.
The performance of each component is monitored by the Oracle instance. The instance uses internal views and statistics to determine how to optimally distribute memory among the automatically sized components. Thus, as the workload changes, memory is redistributed to ensure optimal performance with the new workload. The algorithm tries to find the optimal distribution by taking into consideration long-term and short-term trends.
You can exercise some control over the size of the auto-tuned components by specifying minimum values for each component. This can be useful in cases where you know that an application needs a minimum amount of memory in certain components to function properly.
The sizes of the automatically-tuned components are remembered across shutdowns if a server parameter file (SPFILE) is used. This means that the system picks up where it left off from the last shutdown.
The most significant benefit of using automatic SGA memory management is that the sizes of the different SGA components are flexible and adapt to the needs of a workload without requiring user intervention. Besides maximizing the use of available memory, Automatic Shared Memory Management can enhance workload performance. With manual configuration, it is possible that compiled SQL statements frequently age out of an inadequately sized shared pool, which results in frequent hard parses and reduced performance. When automatic management is enabled, however, the internal tuning algorithm monitors the performance of the workload and grows the shared pool if it determines that doing so will reduce the number of parses required. This provides enhanced performance without requiring any additional resources or manual tuning effort.
The Oracle Database automatically manages its space consumption, sends alerts on potential space problems, and recommends possible solutions. Oracle features that help you to easily manage space include the following:
Earlier releases of Oracle used rollback segments to store undo. Space management for these rollback segments was complex. Automatic undo management eliminates the complexities of managing rollback segments and lets you exert control over how long undo is retained before being overwritten. Oracle strongly recommends that you use an undo tablespace to manage undo rather than rollback segments.
The Undo Advisor improves manageability of transaction management, especially for automatic undo management. The Undo Advisor presents the best retention possible for the given undo tablespace. It also advises a size for the undo tablespace when you want to set undo retention to a particular value.
The Undo Advisor is based on system activity statistics, including the longest running query and undo generation rate. Advisor information includes the following:
Current undo retention
Current undo tablespace size
Longest query duration
Best undo retention possible
Undo tablespace size necessary for current undo retention
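A sketch of how the underlying statistics can be examined directly, and of setting a retention target (the value is illustrative):

    -- longest query and undo generation observed over the collected intervals
    SELECT MAX(maxquerylen) AS longest_query_seconds,
           SUM(undoblks)    AS undo_blocks_generated
      FROM v$undostat;

    -- request that undo be retained for at least 15 minutes (illustrative value)
    ALTER SYSTEM SET UNDO_RETENTION = 900;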
With Oracle-managed files, you do not need to directly manage the files comprising an Oracle database. Oracle uses standard file system interfaces to create and delete files as needed. This automates the routine task of creation and deletion of database files.
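For example, once a default file destination is set (the path below is hypothetical), a tablespace can be created without naming its datafile explicitly:

    ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata';  -- hypothetical path

    CREATE TABLESPACE omf_demo;   -- the datafile is created, named, and later deleted by Oracle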
Oracle allows for managing free space within a table with bitmaps, as well as traditional dictionary based space management. The bitmapped implementation eliminates much space-related tuning of tables, while providing improved performance during peak loads. Additionally, Oracle provides automatic extension of data files, so the files can grow automatically based on the amount of data in the files. Database administrators do not need to manually track and reorganize the space usage in all the database files.
Oracle Database introduces a non-intrusive and timely check for space utilization monitoring. It automatically monitors space utilization during normal space allocation and deallocation operations and alerts you if free space availability falls below the predefined thresholds. Space monitoring functionality is set up out of the box, causes no performance impact, and is uniformly available across all tablespace types. The same functionality is available both through Enterprise Manager and through SQL. Because monitoring is performed at the same time as space is allocated and freed in the database, space usage information is guaranteed to be available whenever you need it.
Notification is performed using server-generated alerts. The alerts are triggered when certain space-related events occur in the database. For example, an alert is raised when the space usage threshold of a tablespace is crossed or when a resumable session encounters an out-of-space situation. The alert is sent immediately so that you can take corrective measures. You may choose to be paged with the alert information and then add space to the tablespace to allow the suspended operation to continue from where it left off.
The database comes with a default set of alert thresholds. You can override the default for a given tablespace or set a new default for the entire database through Enterprise Manager.
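As a sketch, thresholds can also be set from SQL with the DBMS_SERVER_ALERT package. The tablespace name and the percentages below are assumptions, not recommended values.

    BEGIN
      DBMS_SERVER_ALERT.SET_THRESHOLD(
        metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
        warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
        warning_value           => '80',                 -- warn at 80% full
        critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
        critical_value          => '95',                 -- critical at 95% full
        observation_period      => 1,
        consecutive_occurrences => 1,
        instance_name           => NULL,
        object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
        object_name             => 'USERS');             -- hypothetical tablespace
    END;
    /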
Space may be overallocated because it is difficult to predict the space requirements of an object or its growth trend. On tables that are heavily updated, the resulting segment may have a lot of internal fragmentation and possibly row chaining. These issues can result in a wide variety of problems, from poor performance to space wastage. The Oracle Database offers several features to address these challenges.
The Oracle Database can predict the size of a given table based on its structure and estimated number of rows. This is a powerful "what if" tool that allows estimation of the size of an object before it is created or rebuilt. If tablespaces have different extent management policies, then the tool helps you decide which tablespace will cause the least internal fragmentation.
The growth trend report takes you to the next step of capacity planning: planning for growth. Most database systems grow over time. Planning for growth is an important aspect of provisioning resources. To aid this process, the Oracle Database tracks historical space utilization in the AWR and uses this information to predict future resource requirements.
The Oracle Database provides in place reorganization of data for optimal space utilization by shrinking it. Shrinking of a segment makes unused space available to other segments in the tablespace and may improve the performance of queries and DML operations.
The segment shrink functionality both compacts the space used in a segment and deallocates it from the segment. The deallocated space is returned to the tablespace and is available to other objects in the tablespace. Sparsely populated tables can cause a performance problem for full table scans. By performing a shrink, data in the table is compacted and the high water mark of the segment is pushed down. This makes full table scans read fewer blocks and therefore run faster.
Segment shrink is an online operation: the table being shrunk is open to queries and DML while the segment is being shrunk. Additionally, segment shrink is performed in place, which is an advantage over online table redefinition for compaction and reclaiming space. You can schedule segment shrink for one or all of the objects in the database as nightly jobs without requiring any additional space to be provided to the database.
Segment shrink works on heaps, IOTs, LOBs, materialized views, and indexes with row movement enabled in tablespaces with automatic segment space management. When segment shrink is performed on tables with indexes on them, the indexes are automatically maintained when rows are moved around for compaction. User-defined triggers are not fired, however, because compaction is a purely physical operation and does not impact the application.
Note: Segment shrink can be performed only on tables with row movement enabled. Applications that explicitly track rowids of objects cannot be shrunk, because the application tracks the physical location of rows in the objects.
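A minimal sketch, using a hypothetical table:

    ALTER TABLE hr.employees ENABLE ROW MOVEMENT;    -- required before shrinking
    ALTER TABLE hr.employees SHRINK SPACE CASCADE;   -- compact, lower the high water mark,
                                                     -- and shrink dependent index segments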
To easily identify candidate segments for shrinking, the Oracle Database includes the Segment Advisor. The Segment Advisor performs growth trend analysis on individual objects to determine if there will be any additional space left in the object in 7 days. It then uses the reclaim space target to select candidate objects to shrink. The Segment Advisor can be invoked in the comprehensive mode. In this mode, in addition to using the pre-computed statistics in the workload repository, the Segment Advisor performs sampling of the objects under consideration to refine the statistics for the objects. Although this operation is more resource intensive, it may be used to perform a more accurate analysis, when desired.
Automatic Storage Management (ASM) provides a vertical integration of the file system and volume manager specifically built for Oracle database files. ASM distributes I/O load across all available resources to optimize performance while removing the need for manual I/O tuning (spreading out the database files avoids hotspots). ASM helps you manage a dynamic database environment by letting you grow the database size without having to shut down the database to adjust the storage allocation.
Automatic Storage Management lets you define a pool of storage (called a disk group), and then the Oracle kernel manages the file naming and placement of the database files on that pool of storage. You can change the storage allocation (adding or removing disks) with SQL statements (CREATE DISKGROUP, ALTER DISKGROUP, and DROP DISKGROUP). You can also manage the disk groups with Enterprise Manager and the Database Configuration Assistant (DBCA).
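For illustration, the statements below are run from an ASM instance; the disk group name and device paths are hypothetical.

    CREATE DISKGROUP data NORMAL REDUNDANCY
      DISK '/dev/raw/raw1', '/dev/raw/raw2';     -- hypothetical device paths

    ALTER DISKGROUP data ADD DISK '/dev/raw/raw3';

    DROP DISKGROUP data INCLUDING CONTENTS;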
The Oracle Database provides a simplified management interface for storage resources. Automatic Storage Management eliminates the need for manual I/O performance tuning. It virtualizes storage to a set of disk groups and provides redundancy options to enable a high level of protection. ASM facilitates non-intrusive storage configuration changes with automatic rebalancing. It spreads database files across all available storage to optimize performance and resource utilization. It saves time by automating manual storage tasks, thereby increasing your ability to manage more and larger databases with increased efficiency.
Oracle provides several features that help you to easily manage backup and recovery. These include the following:
Oracle Recovery Manager (RMAN) is a powerful tool that simplifies, automates, and improves the performance of backup and recovery operations. RMAN enables one-time backup configuration, automatic management of backups and archived logs based on a user-specified recovery window, restartable backups and restores, and test restore/recovery. RMAN implements a recovery window to control when backups expire. This lets you establish a period of time during which it is possible to discover logical errors and fix the affected objects by doing a database or tablespace point-in-time recovery. RMAN also automatically expires backups that are no longer required to restore the database to a point-in-time within the recovery window. Control file autobackup also allows for restoring or recovering a database even when an RMAN repository is not available.
DBCA can automatically schedule an on-disk backup procedure. All you do is specify the time window for the automatic backups to run. A unified storage location for all recovery-related files and activities in an Oracle database, called the flash recovery area, can be defined with the initialization parameter DB_RECOVERY_FILE_DEST. All files needed to completely recover a database from a media failure, such as control files, archived log files, Flashback logs, RMAN backups, and so on, are part of the flash recovery area.
Allocating sufficient space to the flash recovery area ensures faster, simpler, and automatic recovery of the Oracle database. The flash recovery area manages the files stored in this location in an intelligent manner to maximize space utilization and avoid out-of-space situations to the extent possible. Based on the specified RMAN retention policy, it automatically deletes obsolete backups and archived logs that are no longer required.
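For example (the size and path are illustrative; the size limit is set before the destination):

    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;
    ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u02/flash_recovery_area';  -- hypothetical path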
Incremental backups let you back up only the blocks changed since the previous backup. When the block change tracking feature is enabled, Oracle tracks the physical location of all database changes. RMAN automatically uses the change tracking file to determine which blocks need to be read during an incremental backup and directly accesses those blocks to back them up. This reduces the amount of time needed for daily backups, saves network bandwidth when backing up over a network, and reduces backup file storage.
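Block change tracking is enabled with a single statement; the tracking file location below is hypothetical.

    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
      USING FILE '/u01/app/oracle/oradata/change_tracking.f';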
Incremental backups can be used to update a previously made backup. With incrementally updated backups, you can merge the image copy of a datafile with an RMAN incremental backup, resulting in an updated backup that contains the changes captured by the incremental backup. This eliminates the requirement to make a whole database backup repeatedly. You can make a full database backup once for a given database and then use incremental backups to keep the full backup updated. A backup strategy based on incrementally updated backups can help keep the time required for media recovery of your database to a minimum.
Oracle allows for better control over database downtime by letting you specify the mean time to recover (MTTR) from system failures in seconds. This, coupled with dynamic initialization parameters, helps improve database availability. After you set a time limit for how long recovery from a system failure can take, Oracle automatically and transparently makes sure that the system can restart in that time frame, regardless of the application activity running on the system at the time of the failure. This provides the fastest possible uptime after a system failure.
The smaller the online logfiles are, the more aggressively DBWR processes perform incremental checkpoints, which means more physical writes. This may adversely affect the runtime performance of the database. Furthermore, if you set FAST_START_MTTR_TARGET, then the smallest logfile size may drive incremental checkpointing more aggressively than needed by the MTTR.
The Logfile Size Advisor determines the optimal smallest logfile size from the current FAST_START_MTTR_TARGET setting and the MTTR statistics. A smallest logfile size is considered optimal if it does not drive incremental checkpointing more aggressively than needed by FAST_START_MTTR_TARGET.
The MTTR Advisor helps you evaluate the effect of different MTTR settings on system performance in terms of extra physical writes. When the MTTR Advisor is enabled, after the system runs a typical workload, you can query V$MTTR_TARGET_ADVICE to see the ratio of the estimated number of cache writes under other MTTR settings to the number of cache writes under the current MTTR. For instance, a ratio of 1.2 indicates 20% more cache writes.
By looking at the different MTTR settings and their corresponding cache write ratios, you can decide which MTTR value fits your recovery and performance needs. V$MTTR_TARGET_ADVICE also gives the ratio on total physical writes, including direct writes, and the ratio on total I/O, including reads.
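A sketch of both steps (the target value is illustrative):

    ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;   -- aim for about 60 seconds of crash recovery

    SELECT mttr_target_for_estimate,
           estd_cache_writes,
           estd_cache_write_factor
      FROM v$mttr_target_advice;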
Oracle Flashback technology lets you view and rewind data back and forth in time. You can query past versions of schema objects, query historical data, perform change analysis, or perform self-service repair to recover from logical corruptions while the database is online.
This revolutionizes recovery by operating on just the changed data. The time it takes to recover from the error is equal to the amount of time it took to make the mistake.
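For example, a Flashback Query against a hypothetical table returns the data as it existed in the past:

    SELECT *
      FROM hr.employees                                        -- hypothetical table
        AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE)  -- data as of 15 minutes ago
     WHERE employee_id = 100;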
Enterprise Manager has several powerful configuration management facilities that help detect configuration changes and differences and enforce best practice configuration parameter settings. These capabilities also encompass the underlying hosts and operating systems.
Enterprise Manager continuously monitors the configuration of all Oracle systems for such things as best-practice parameter settings, security setup, storage and file space conditions, and recommended feature usage. Non-conforming systems are automatically flagged with a detailed explanation of the specific system configuration issue. For example, Enterprise Manager advises you to use new functionality such as automatic undo management or locally managed tablespaces if they are not being used. This automatic monitoring of system configurations promotes best-practice configuration management and reduces both the administrator workload and the risk of availability, performance, or security compromises.
Enterprise Manager also automatically alerts you to new critical patches – such as important security patches – and flags all systems that require that patch. In addition, you can invoke the Enterprise Manager patch wizard to find out what interim patches are available for that installation.
Oracle provides the following resource management features:
The Database Resource Manager provides the ability to prioritize work within the Oracle system. High priority users get resources, so as to minimize response time for online workers, for example, while lower priority users, such as batch jobs or reports, could take longer. This allows for more granular control over resources and provides features such as automatic consumer group switching, maximum active sessions control, query execution time estimation and undo pool quotas for consumer groups. You can specify the maximum number of concurrently active sessions for each consumer group. When this limit is reached, the Database Resource Manager queues all subsequent requests and runs them only after existing active sessions complete.
The Database Resource Manager solves many resource allocation problems that an operating system does not manage so well:
Excessive overhead. This results from operating system context switching between Oracle database server processes when the number of server processes is high.
Inefficient scheduling. The operating system deschedules Oracle database servers while they hold latches, which is inefficient.
Inappropriate allocation of resources. The operating system distributes resources equally among all active processes and is unable to prioritize one task over another.
Inability to manage database-specific resources.
With Oracle's Database Resource Manager, you can:
Guarantee certain users a minimum amount of processing resources regardless of the load on the system and the number of users
Distribute available processing resources by allocating percentages of CPU time to different users and applications. In a data warehouse, a higher percentage may be given to ROLAP (relational on-line analytical processing) applications than to batch jobs.
Limit the degree of parallelism of any operation performed by members of a group of users
Create an active session pool. This pool consists of a specified maximum number of user sessions allowed to be concurrently active within a group of users. Additional sessions beyond the maximum are queued for execution, but you can specify a timeout period, after which queued jobs terminate.
Allow automatic switching of users from one group to another group based on administrator-defined criteria. If a member of a particular group of users creates a session that runs for longer than a specified amount of time, that session can be automatically switched to another group of users with different resource requirements.
Prevent the execution of operations that are estimated to run for a longer time than a predefined limit
Create an undo pool. This pool consists of the amount of undo space that can be consumed by a group of users.
Configure an instance to use a particular method of allocating resources. You can dynamically change the method, for example, from a daytime setup to a nighttime setup, without having to shut down and restart the instance.
It is thus possible to balance one user's resource consumption against that of other users and to partition system resources among tasks of varying importance, to achieve overall enterprise goals.
See Also: Oracle Database Administrator's Guide for information about using the Database Resource Manager
Resources are allocated to users according to a resource plan specified by the database administrator. The following terms are used in specifying a resource plan:
A resource plan specifies how the resources are to be distributed among various users (resource consumer groups).
Resource consumer groups let you group user sessions together by resource requirements. Resource consumer groups are different from user roles; one database user can have different sessions assigned to different resource consumer groups.
Resource allocation methods determine what policy to use when allocating for any particular resource. Resource allocation methods are used by resource plans and resource consumer groups.
Resource plan directives are a means of assigning consumer groups to particular plans and partitioning resources among consumer groups by specifying parameters for each resource allocation method.
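The following PL/SQL sketch ties these terms together using the DBMS_RESOURCE_MANAGER package; all plan, group, and percentage values are illustrative.

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'daytime_plan',
        comment => 'Favor online users over batch work');

      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'oltp_group',
        comment        => 'Online users');

      -- give online users 80% of CPU at level 1
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'daytime_plan',
        group_or_subplan => 'oltp_group',
        comment          => 'High priority work',
        cpu_p1           => 80);

      -- everything else shares the remaining CPU at level 2
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'daytime_plan',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'All remaining sessions',
        cpu_p2           => 100);

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /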
The Database Resource Manager also allows for creation of plans within plans, called subplans. Subplans allow further subdivision of resources among different users of an application.
Levels provide a mechanism to specify distribution of unused resources among available users. Up to eight levels of resource allocation can be specified.
Services represent groups of applications with common attributes, service level thresholds, and priorities. Application functions can be divided into workloads identified by services. For example, the Oracle E*Business suite can define a service for each responsibility, such as general ledger, accounts receivable, order entry, and so on. Oracle Email Server can define services for iMAP processes, postman, garbage collector, monitors, and so on. A service can span one or more instances of an Oracle database or multiple databases in a global cluster, and a single instance can support multiple services.
The number of instances offering the service is transparent to the application. Services provide a single system image to manage competing applications, and they allow each workload to be managed as a single unit.
Middle tier applications and clients select a service by specifying the service name as part of the connection in the TNS connect data. For example, data sources for the Web server or the application server are set to route to a service. Using Net Easy*Connection, this connection includes the service name and network address. For example, service:IP.
Server-side work, such as the Scheduler, parallel query, and Oracle Streams Advanced Queuing, sets the service name as part of the workload definition. For the Scheduler, jobs are assigned to job classes, and job classes run within services. For parallel query and parallel DML, the query coordinator connects to a service, and the parallel query slaves inherit the service for the duration of the query. For Oracle Streams Advanced Queuing, streams queues are accessed using services. Work running under a service inherits the thresholds and attributes for the service and is measured as part of the service.
The Database Resource Manager binds services to consumer groups and priorities. This lets services be managed in the database in the order of their importance. For example, you can define separate services for high priority online users and lower priority internal reporting applications. Likewise, you can define gold, silver, and bronze services to prioritize the order in which requests are serviced for the same application.
When planning the services for a system, include the priority of each service relative to the other services. In this way, the Database Resource Manager can satisfy the highest priority services first, followed by the next priority services, and so on.
The Automatic Workload Repository lets you analyze the performance of workloads using the aggregation dimension for service. It automatically maintains response time and CPU consumption metrics, performance and resource statistics, wait events, threshold-based alerts, and performance indexes for all services.
Service, module, and action tags identify operations within a service at the server. (MODULE and ACTION are set by the application.) End-to-end monitoring enables aggregation and tracing at the service, module, and action levels to identify high-load operations. Oracle Enterprise Manager administers the service quality thresholds for response time and CPU consumption, monitors the top services, and provides drill-down to the top modules and top actions for each service.
With the Automatic Workload Repository, performance management by the service aggregation makes sense when monitoring by sessions may not. For example, in systems using connection pools or transaction processing monitors, the sessions are shared, making accountability difficult.
The service, module, and action tags provide major and minor boundaries to discriminate the work and the processing flow. This aggregation level lets you tune groups of SQL that run together (at service, module, and action levels). These statistics can be used to manage service quality, to assess resource consumption, to adjust priorities of services relative to other services, and to point to places where tuning is required. With Real Application Clusters (RAC), services can be provisioned on different instances based on their current performance.
Connect time routing and runtime routing algorithms balance the workload across the instances offering a service. The metrics for server-side connection load balancing are extended to include service performance. Connections are shared across instances according to the current service performance. Using service performance for load balancing accommodates nodes of different sizes and workloads with competing priorities. It also prevents sending work to nodes that are hung or failed.
The Automatic Workload Repository maintains metrics for service performance continuously. These metrics are available when routing runtime requests from mid-tier servers and TP monitors to RAC. For example, Oracle JDBC connection pools use the service data when routing the runtime requests to instances offering a service.
Real Application Clusters (RAC) use services to enable uninterrupted database operations. Services are tightly integrated with the Cluster Ready Services high availability framework that supports RAC. When a failure occurs, the service continues uninterrupted on the nodes and instances unaffected by the failure. Those elements of the service affected by the failure are recovered quickly by Cluster Ready Services, and the recovering sessions are balanced across the surviving system automatically. For planned outages, RAC provides interfaces to relocate, disable, and enable services. Relocate migrates the service to another instance and, as an option, disconnects the sessions. To prevent the Cluster Ready Services system from responding to an unplanned failure that happens during maintenance or repair, the service is disabled on the node doing maintenance at the beginning of the planned outage and then enabled at the end of the outage. These service-based operations, in combination with schema pre-compilation (DBMS_SCHEMA_COPY) on a service basis, minimize the downtime for many planned outages. For example, application upgrades, operating system upgrades, hardware upgrades and repairs, Oracle patches approved for rolling upgrade, and parameter changes can be implemented by isolating one or more services at a time.
The continuous service built into RAC is extended to applications and mid-tier servers. When the state of a service changes (for example, up, down, or not restarting), the new status is notified to interested subscribers through events and callouts. Applications can use this notification to achieve very fast detection of failures, balancing of connection pools following failures, and balancing of connection pools again when the failed components are repaired. For example, when the service at an instance starts, the event and callouts are used to immediately trigger work at the service. When the service at an instance stops, the event is used to interrupt applications using the service at that instance. Using the notification eliminates the client waiting on TCP timeouts. The events are integrated with Oracle JDBC connection pools and Transparent Application Failover (TAF).
With Oracle Data Guard, production services are offered at the production site. Other standby sites can offer reporting services when operating in read only mode. RAC and Data Guard Broker are integrated, so that when running failover, switchover, and protection mode changes, the production services are torn down at the original production site and built up at the new production site. There is a controlled change of command between Cluster Ready Services managing the services locally and Data Guard managing the transition. When the Data Guard transition is complete, Cluster Ready Services resumes management of the high availability operation automatically.
Today's large databases demand minimal scheduled downtime, and DBAs are often required to manage multiple databases with an increasing number of database files. Automatic Storage Management lets you be more productive by making some manual storage management tasks obsolete.
Automatic Storage Management is a vertical integration of both the file system and the volume manager built specifically for Oracle database files. It extends the concept of stripe and mirror everything (SAME) to optimize performance, while removing the need for manual I/O tuning (distributing the datafile layout to avoid hotspots). Automatic Storage Management helps manage a dynamic database environment by letting you grow the database size without shutting down the database to adjust the storage allocation. Automatic Storage Management also enables low cost modular storage to deliver higher performance and greater availability by supporting mirroring as well as striping.
Automatic Storage Management lets you create a pool of storage, and then it manages the file naming and placement of the database files on that storage. You can change the storage allocation (add or remove disks) with the following SQL statements: CREATE DISKGROUP, ALTER DISKGROUP, and DROP DISKGROUP.
You can also manage the disk groups with Enterprise Manager. For creating and deleting files, the Oracle Database internally allocates storage space from these disk groups. As part of disk group setup, you can define failure groups as an additional level of redundancy to protect against loss of a subset of disks, such as a disk controller or disk array. Failure groups let Automatic Storage Management place a mirrored copy intelligently, by not putting copies of data on disks that would be inaccessible if a single component failed.
The Oracle Database provides a simplified management interface for storage resources. Automatic Storage Management eliminates the need for manual I/O performance tuning. It simplifies storage to a set of disk groups and provides redundancy options to enable a high level of protection. Automatic Storage Management facilitates non-intrusive storage allocations and provides automatic rebalancing. It spreads database files across all available storage to optimize performance and resource utilization. It also saves time by automating manual storage tasks, thereby increasing your ability to manage more and larger databases with increased efficiency.
This section defines some of the basic concepts with Automatic Storage Management.
A disk group is one or more Automatic Storage Management disks managed as a logical unit. The data structures in a disk group are self contained and consume some of the disk space in a disk group. Automatic Storage Management disks can be added or dropped from a disk group while the database is running. Automatic Storage Management rebalances the data to ensure an even I/O load to all disks in a disk group even as the disk group configuration changes.
Any single Automatic Storage Management file is contained in a single disk group. However, a disk group can contain files belonging to several databases, and a single database can use storage from multiple disk groups. One or more disk groups can be specified as the default disk group for files created in a database.
Disk groups can be created when creating a database or when a new application is developed. Disk groups can also be changed when the database server configuration is altered.
Most installations probably have two disk groups. Reasons for having different disk groups include the following:
To group disks of different sizes or performance characteristics together
To group disks with different external redundancy together; disks that have the same external redundancy could be in the same disk group, but disks that have different external redundancy should be in different disk groups.
To separate database areas and flash recovery areas for a database
There are three types of disk groups: normal redundancy, high redundancy, and external redundancy. With normal and high redundancy, Automatic Storage Management provides redundancy for all files in the disk group according to the attributes specified in the disk group templates. High redundancy provides a greater degree of protection. With external redundancy, Automatic Storage Management does not provide any redundancy for the disk group. The underlying disks in the disk group should provide redundancy (for example, using a storage array), or the user must be willing to tolerate loss of the disk group if a disk fails (for example, in a test environment).
When the database requests it, Automatic Storage Management creates files. Automatic Storage Management assigns each file a fully qualified name ending in a dotted pair of numbers. You can create more user-friendly alias names for the Automatic Storage Management filenames. To see alias names for Automatic Storage Management files, query the V$ASM_ALIAS data dictionary view. V$ASM_ALIAS can only be queried from an ASM instance. Users can find the names for ASM files by querying the appropriate V$ view in the database instance (V$DATAFILE, V$LOGFILE, V$CONTROLFILE, and so on). In general, users need not be aware of file names.
All existing situations where a filename is required are augmented with a mechanism for recognizing Automatic Storage Management file naming syntax.
When a file is created, certain file attributes are permanently set. Among these are its protection policy (mirroring) and its striping policy. Automatic Storage Management files are not visible from the operating system or its utilities, but they are visible to database instances, RMAN, and other Oracle-supplied tools.
See Also: Oracle Database Administrator's Guide for information on Automatic Storage Management files and attributes
Automatic Storage Management templates are collections of attributes used by Automatic Storage Management during file creation. Templates simplify file creation by mapping complex file attribute specifications into a single name. A default template exists for each Oracle file type. Each disk group contains its own definition of templates. Templates of the same name can exist in different disk groups with each having their own unique properties.
You can change the attributes of the default templates or add your own unique templates. This lets you specify the appropriate file creation attributes as a template. If you need to change an Automatic Storage Management file attribute after the file has been created, then the file must be copied using RMAN into a new file with the new attributes.
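For example, a user-defined template could be added to a disk group, or a default template adjusted, as sketched here; the disk group name, template name, and attributes are illustrative.

    ALTER DISKGROUP data ADD TEMPLATE temp_files ATTRIBUTES (UNPROTECTED FINE);

    -- change a default template so that online logs use fine-grained striping
    ALTER DISKGROUP data ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);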
Storage is added and removed from disk groups in units of Automatic Storage Management (ASM) disks. ASM disks can be entire physical disks, LUNs from a storage array, or pre-created files in a NAS filer. ASM disks should be independent of each other to obtain optimal I/O performance. For instance, with a storage array, you might specify a LUN that represents a hardware mirrored pair of physical disks to ASM as a single ASM disk. If you specify two separate LUNs that are striped by a storage array across the same set of physical drives, then this may cause suboptimal performance.
If you are using a NAS filer, then the sizes of the files specified as ASM disks must be a multiple of 1 megabyte. For optimal performance, NAS files in the same disk group should have independent underlying physical drives.
You should not specify a device that contains data that should not be overwritten. For instance, on some operating systems, certain partitions contain the partition table at the beginning of the partition. Such partitions should not be specified as ASM disks.
ASM performs I/O to ASM disks through a single logical path. Therefore, if you are using a multi-pathing driver with ASM, then you should specify the logical path to the ASM disk and ensure that the ASM discovery string (ASM_DISKSTRING initialization parameter) includes only a single logical path to each ASM disk.
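For example, an illustrative excerpt from the ASM instance's initialization parameter file might restrict discovery to one logical path per disk; the device path pattern shown here is hypothetical and platform-specific:

# ASM instance parameters (illustrative values only)
INSTANCE_TYPE  = ASM
ASM_DISKSTRING = '/dev/rdsk/mpath*'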
In a cluster, ASM disks must be visible to all ASM instances in the cluster, but the path to an ASM disk need not be identical on each node, as long as each instance's ASM_DISKSTRING includes the path to the disk for that instance.
An Automatic Storage Management disk name is common to all nodes of the cluster. The name can be specified by the administrator, or it can be automatically generated by Automatic Storage Management when the disk is added to a disk group. The Automatic Storage Management disk name abstraction is required, because different hosts can use different operating system names to refer to the same disk.
Automatic Storage Management provides mirroring to reduce the chances of losing data due to disk failure. This is necessary, because the loss of the otherwise unduplicated data from a single Automatic Storage Management disk could damage every file in the disk group.
Failure groups are administratively assigned sets of disks sharing a common resource. Failure groups are used to determine which Automatic Storage Management disks to use for storing redundant copies of data. The use of failure groups ensures that data and the redundant copy of the data do not both reside on disks that are likely to fail together.
The composition of failure groups is site-specific. Failure group decisions are based on what component failures the system can tolerate. For example, suppose you have five disks and one SCSI controller. The failure of the SCSI controller makes all disks unavailable. In this scenario, you should put each disk in its own failure group. However, if you have two SCSI controllers, each with two disks, and you want to tolerate a SCSI controller failure, then you should create a failure group for each controller.
By default, Automatic Storage Management assigns each disk to its own failure group. When creating a disk group or adding a disk to a disk group, you can specify your own grouping of disks into failure groups. Automatic Storage Management can then optimize file layout to reduce the unavailability of data due to failures.
A failure group is maintained in a disk group, and multiple failure groups can exist within any given disk group. However, changing a disk's failure group requires dropping the disk from the disk group and then adding the disk back to the disk group under the new failure group name.
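As an illustrative sketch of the two-controller scenario described above (the disk group, failure group, and device names are hypothetical):

-- Each failure group contains the disks attached to one controller
CREATE DISKGROUP dgroup2 NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/devices/c1d1', '/devices/c1d2'
  FAILGROUP controller2 DISK '/devices/c2d1', '/devices/c2d2';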
The Automatic Storage Management instance is a special Oracle instance that manages the disks in disk groups. The Automatic Storage Management instance must be configured and running for the database instance to access Automatic Storage Management files. This configuration is done automatically if the Database Configuration Assistant was used for database creation.
Automatic Storage Management instances cannot mount databases. The Automatic Storage Management instances simply coordinate data layout for database instances. Database instances do direct I/O to disks in disk groups without going through an Automatic Storage Management instance.
Multiple and separate database instances can share disk groups for their files. On a single node, there is typically a single Automatic Storage Management instance on the node, which manages all disk groups. In a Real Application Clusters environment, there is typically one Automatic Storage Management instance on each node managing all disk groups for its node in a coordinated manner with the rest of the cluster.
All Automatic Storage Management management commands must be directed to the Automatic Storage Management instance, and not to the Oracle database instance using the Automatic Storage Management files.
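For example, while connected to the ASM instance (not the database instance), management commands such as the following illustrative statements can be issued; the disk group and device names are hypothetical:

-- Issued at the ASM instance
ALTER DISKGROUP dgroup1 ADD DISK '/devices/diskc1';
ALTER DISKGROUP dgroup1 REBALANCE POWER 5;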
An Automatic Storage Management instance contains two new background processes. One coordinates rebalance activity for disk groups. It is called RBAL. The second one performs the actual rebalance data extent movements. There can be many of these at a time, and they are called ARB0, ARB1, and so forth. An Automatic Storage Management instance also has most of the same background processes as a database instance (SMON, PMON, LGWR, and so on).
A database instance using an Automatic Storage Management disk group contains a background process called ASMB that communicates with the Automatic Storage Management instance. Another background process called RBAL performs a global open on Automatic Storage Management disks.
Automatic Storage Management provides easier administration.
There is no need to specify and manage filenames. Wherever a filename would normally be supplied, a disk group can be specified instead. Every new file automatically gets a unique name. This prevents using the same filename in two different databases. Disk group naming also avoids using two different names for the same file.
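For example, a tablespace can be created by naming only a disk group; Automatic Storage Management generates and manages the underlying filename. The disk group name dgroup1 is hypothetical:

-- No filename is specified, only the disk group
CREATE TABLESPACE sales_ts DATAFILE '+dgroup1' SIZE 500M;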
In many situations, Automatic Storage Management provides the functionality otherwise provided by external volume managers and file systems.
Automatic Storage Management includes storage reliability features, such as mirroring. The storage reliability policy is applied on a file basis, rather than on a volume basis. Hence, the same disk group can contain a combination of files protected by mirroring, parity, or not protected at all.
Automatic Storage Management improves performance.
Automatic Storage Management maximizes performance by automatically distributing database files across all disks in a disk group. It has the performance of raw disk I/O without the inconvenience of managing raw disks.
Unlike logical volume managers, Automatic Storage Management maintenance operations do not require that the database be shut down. This allows adding or dropping disks while the disks are in use.
Automatic Storage Management eliminates the need for manual disk tuning. To help manage performance, file creation attributes are controlled by disk group-specific templates.
See Also: Oracle Database Administrator's Guide for detailed information on using Automatic Storage Management
To help simplify management tasks, as well as to provide a rich set of functionality for complex scheduling needs, Oracle provides a collection of functions and procedures in the DBMS_SCHEDULER package. Collectively, these functions are called the Scheduler, and they are callable from any PL/SQL program.
The Scheduler simplifies the scheduling and management of all background processes (jobs). The Scheduler ensures that jobs are processed at a pre-defined time in an efficient manner while maximizing the use of system resources. It lets you manage and monitor the jobs. Detailed information on all Scheduler activities is available, thus providing key information on resource utilization that can be used for planning for future capacity requirements.
The Scheduler supports various types of jobs, such as PL/SQL stored procedures and anonymous blocks, C functions, Java stored procedures, and operating system scripts. It also supports distributed database scheduling, thus enabling you to run and manage jobs on remote databases.
The Scheduler provides complex enterprise scheduling functionality. You can use this functionality to do the following:
The most basic capability of a job scheduler is the ability to schedule a job to run at a predetermined date and time. The Scheduler lets you specify when a job should run. You can schedule a job to run at a specified time (for example, January 23, 2003 at 1:00 AM), at any interval (for example, every day of the year), or on a relative date, such as the last day of the month.
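A minimal sketch, assuming a job named nightly_stats and an existing stored procedure ops.gather_nightly_stats (both hypothetical):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'nightly_stats',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ops.gather_nightly_stats',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',   -- every day at 1:00 AM
    enabled         => TRUE);
END;
/

A repeat interval such as 'FREQ=MONTHLY;BYMONTHDAY=-1' expresses a relative date such as the last day of the month.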
A task can be defined as a generic executable program that takes parameters so that it can be reused. You can create a program library from which users can select predefined programs. This enables multiple jobs to point to the same program without having to redefine it.
Scheduler objects, such as schedules, can be saved in a library so that they can be shared with other users without having to be specified each time a job is created.
The Scheduler enables job processing in a way that models your business requirements. It enables limited computing resources to be allocated appropriately among competing jobs, thus aligning job processing with your business needs. Jobs that share common characteristics and behavior can be grouped into larger entities called job classes. You can prioritize among the classes by controlling the resources allocated to each class. This lets you ensure that critical jobs have priority and enough resources to complete. Jobs can also be prioritized within a job class.
The Scheduler also provides the ability to change the prioritization based on a schedule. Because the definition of a critical job can change across time, the Scheduler lets you define different class priorities at different times.
There are multiple states that a job undergoes from its creation to its completion. All Scheduler activity is logged, and information, such as the status of the job and the time to completion, can be easily tracked. This information is stored in views. It can be queried with Enterprise Manager or a SQL query. The views provide information about jobs and their execution that can help you schedule and manage your jobs better. For example, you can easily track all jobs that failed for user scott.
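For example, a query such as the following lists the failed runs for user scott using the DBA_SCHEDULER_JOB_RUN_DETAILS view:

SELECT job_name, status, error#, actual_start_date
  FROM dba_scheduler_job_run_details
 WHERE owner = 'SCOTT' AND status = 'FAILED';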
A cluster is a set of database instances that cooperate to perform the same task. Oracle Real Application Clusters (RAC) provides scalability and reliability without any change to your applications. The Scheduler fully supports execution of jobs in such a clustered environment. To balance the load on your system and for better performance, you can also specify the service where you want a job to run.
You can use the DBMS_SCHEDULER package to transfer files automatically within a single database and between databases. Third-party file transfers are also supported by the Scheduler. You can monitor a long-running file transfer done by the Scheduler using the V$SESSION_LONGOPS dynamic performance view at the databases reading or writing the file. Any database links used by a Scheduler job must be fixed user database links.
See Also:
The Scheduler has the following basic concepts that pertain to creating jobs:
A program is a collection of metadata about what task the Scheduler will run. It is a generic task that is defined using parameters that can be specified at run time. It includes information such as the action that will be executed (for example, a procedure or executable name), the type of action that will be executed (for example, a PL/SQL or Java stored procedure, or an operating system script), and the arguments required to perform the task. For example, a program that analyzes tables in a database would take the table name as a parameter. The table name can be specified at run time.
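A sketch of the table-analysis example, using a hypothetical program name analyze_table and a hypothetical stored procedure ops.analyze_one_table that takes the table name as its single argument:

BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name        => 'analyze_table',
    program_type        => 'STORED_PROCEDURE',
    program_action      => 'ops.analyze_one_table',
    number_of_arguments => 1);

  -- The table name is supplied at run time as argument 1
  DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT(
    program_name      => 'analyze_table',
    argument_position => 1,
    argument_type     => 'VARCHAR2');

  DBMS_SCHEDULER.ENABLE('analyze_table');
END;
/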
A schedule indicates when a job should run. It has a start date that specifies the date and time when the schedule starts, an end date which indicates the date and time when the schedule expires, as well as a repeat interval that indicates how often it will repeat.
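For example, a reusable schedule (the name weeknight_1am is hypothetical) could be created as follows:

BEGIN
  DBMS_SCHEDULER.CREATE_SCHEDULE(
    schedule_name   => 'weeknight_1am',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYDAY=MON,TUE,WED,THU,FRI;BYHOUR=1',
    comments        => 'Runs at 1:00 AM on weeknights');
END;
/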
A job is a user-defined task that is scheduled to run one or more times. A job contains two pieces of information: what task (the action) needs to run, and when it should run (the schedule). The action and schedule can be specified when creating the job or alternatively can refer to an existing program and schedule.
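Continuing the sketch, a job can simply point to the program and schedule assumed above, with the table name supplied as a job argument; the job and table names are hypothetical:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name      => 'analyze_sales',
    program_name  => 'analyze_table',
    schedule_name => 'weeknight_1am',
    enabled       => FALSE);

  -- Supply the run-time value for the program's table-name argument
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
    job_name          => 'analyze_sales',
    argument_position => 1,
    argument_value    => 'SALES');

  DBMS_SCHEDULER.ENABLE('analyze_sales');
END;
/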
The Scheduler has some advanced concepts that apply to processing of jobs. These concepts enable limited computing resources to be allocated appropriately among competing jobs, thus aligning job processing with your business needs. They are:
A job class is a group of jobs that have similar characteristics and behavior. A job class is a way of grouping jobs into larger entities, thus enabling limited resources to be prioritized among the job classes.
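As an illustration, assuming a Resource Manager consumer group named FINANCE_GROUP already exists, a job class could be created and an existing job placed in it as follows (all names are hypothetical):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name          => 'finance_jobs',
    resource_consumer_group => 'FINANCE_GROUP',
    comments                => 'Critical finance batch jobs');

  -- Place an existing job in the class
  DBMS_SCHEDULER.SET_ATTRIBUTE('analyze_sales', 'job_class', 'finance_jobs');
END;
/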
A window is a time duration for which certain resource allocations are specified. Like a schedule, it has a start date, when the window becomes active or opens, a duration that specifies how long the window remains open, and an end date that indicates when the window expires. It can also have a repeat interval that specifies how often the window opens. Generally, a resource plan is associated with a window; the resource plan specifies how resources should be allocated among the job classes, thus enabling you to define different class priorities at different times.
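A minimal sketch, assuming a Resource Manager plan named NIGHT_PLAN already exists and using a hypothetical window name, might define a nightly window as follows:

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW(
    window_name     => 'nightly_window',
    resource_plan   => 'NIGHT_PLAN',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=22',   -- opens at 10:00 PM
    duration        => INTERVAL '8' HOUR,        -- remains open for 8 hours
    comments        => 'Night batch window');
END;
/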