Sunday, November 18, 2012

Create Oracle password file

In Oracle there are two ways to authenticate database administrators: OS authentication or the Oracle password file. If the remote_login_passwordfile init parameter is set to 'shared' or 'exclusive', the Oracle password file is used. 'exclusive' is the default value.
The default location for the password file on unix is:
$ORACLE_HOME/dbs/orapw$ORACLE_SID
and on windows:
%ORACLE_HOME%\database\PWD%ORACLE_SID%.ora


You can create the Oracle password file with the orapwd utility (run it as the oracle user with the $ORACLE_SID and $ORACLE_HOME environment variables set):
orapwd file=orapwtestdb password=xxxxx

You can also use the 'entries' option when you create the password file. This option tells how many users can be added into the file:
orapwd file=orapwtestdb password=xxxxx entries=5


If you grant a user the SYSDBA or SYSOPER privilege, that user is added into the password file automatically.

You can check the password file users from the v$pwfile_users view in sqlplus:
select * from  v$pwfile_users;
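For example (the user name is made up), granting SYSDBA adds the user to the file; the exact columns vary by version (11g also shows a SYSAS column):
SQL> GRANT SYSDBA TO testdba;
SQL> SELECT * FROM v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
SYS                            TRUE  TRUE
TESTDBA                        TRUE  FALSE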

Oracle Database Availability Changes.

There are several ways to affect Oracle database availability.

1. First, when you start the database you can start it in nomount or in mount state.
SQL> STARTUP NOMOUNT;
SQL> STARTUP MOUNT;

These states are used for administrative operations. For example, in nomount state you can run restore jobs for controlfiles with RMAN, and in mount state you can change database settings such as archiving. But in these states your database answers only locally and only to certain commands.

To open the database for general use run the following (this is the same as: alter database open read write; ):
SQL> ALTER DATABASE OPEN;
If you want to open the database for READ access only, you can do it with this (this doesn't allow any writes):

SQL> ALTER DATABASE OPEN READ ONLY;



You can also use restricted mode (RESTRICT can also be used with OPEN, MOUNT and NOMOUNT):

SQL> STARTUP RESTRICT;
In restricted mode only users with the RESTRICTED SESSION system privilege can connect to the database, and only local connections are available. Only database administrators should have this privilege.
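You can also toggle restricted mode on an instance that is already open (note that existing sessions are not disconnected by this):
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;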



2. Second, you can use logon and logoff triggers to restrict user sessions when they connect to or disconnect from the database.
Remember that connection restrictions made by this kind of trigger do not affect database administrator users unless you revoke the ADMINISTER DATABASE TRIGGER privilege from the DBA user.
The syntax goes like this (a concrete sketch follows the templates):
#Logon trigger:
SQL> CREATE OR REPLACE TRIGGER <trigger_name> 
           AFTER LOGON ON DATABASE    -- or: AFTER LOGON ON <schema_name>.SCHEMA
           BEGIN
               <trigger actions>
          END;

#Logoff trigger:
SQL> CREATE OR REPLACE TRIGGER <trigger_name> 
           BEFORE LOGOFF ON DATABASE  -- or: BEFORE LOGOFF ON <schema_name>.SCHEMA
           BEGIN
               <trigger actions>
          END;
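As a minimal sketch (the trigger and user names are made up), a logon trigger that blocks direct logons for one account could look like this:

SQL> CREATE OR REPLACE TRIGGER block_batch_logon
           AFTER LOGON ON DATABASE
           BEGIN
               -- Stop everyone except DBAs from logging on as BATCH_USER:
               IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'BATCH_USER' THEN
                   RAISE_APPLICATION_ERROR(-20001, 'Direct logons as BATCH_USER are not allowed');
               END IF;
           END;
           /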

More info can be found for example here:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28843/tdddg_triggers.htm#TDDDG52900


3. Third, you can use sqlnet.ora (this works on Oracle 11g; with older versions you need to use the protocol.ora config file) to make IP restrictions for database connections:

Add following lines in sqlnet.ora file:
#This turns valid node checking on (host names in the excluded/invited lists that fail to resolve to an IP address cause a hard failure):
tcp.validnode_checking = YES
#To specify which clients are denied access to the database:  
tcp.excluded_nodes = {list of IP_addresses}
#To specify which clients are allowed access to the database:
tcp.invited_nodes = {list of IP_addresses}
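For example (the addresses are placeholders):
tcp.validnode_checking = YES
tcp.invited_nodes = (192.168.1.10, 192.168.1.11, dbclient01)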

After you have done these changes you have to restart the listener:
lsnrctl status
lsnrctl stop
lsnrctl start
lsnrctl status



4. Fourth, you can use your firewall settings (or iptables locally) to restrict database availability.
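A local iptables sketch (the subnet and the 1521 listener port are examples; adapt them to your environment):
#Allow the listener port only from one subnet and drop everything else:
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 1521 -j ACCEPT
iptables -A INPUT -p tcp --dport 1521 -j DROP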

Tuesday, November 13, 2012

Oracle in VMware memory recommendations

There are several VMware documents that give recommendations about memory usage on Oracle servers. Example: http://www.vmware.com/files/pdf/partners/oracle/Oracle_Databases_on_VMware_-_Best_Practices_Guide.pdf
And there are 2 main points in those documents.

1. Set server memory reservations equal to the size of the Oracle SGA. This avoids unnecessary kernel swapping between ESX and the guest OS.

2. Use large memory pages. This improves Oracle database performance in VMware.


Large Memory Pages usage in Linux:

This is what you need to do to start using large memory pages in linux:

1. Add memlock user limits in /etc/security/limits.conf .
Set the value (in KB) slightly smaller than the installed RAM.
If you already have these values in limits.conf, check that they are set correctly.
    #HugePages   
    *   soft   memlock    6291456
    *   hard   memlock    6291456

2. Log on as the Oracle product owner (usually oracle) and check the memlock limit:
    $ ulimit -l
     60397977


3. With Large Memory Pages (HugePages) you have to use ASMM (Automatic Shared Memory Management), because AMM (Automatic Memory Management) does not work with HugePages.

So when you create a new database you have to choose ASMM and set PGA max and SGA max (their combined max should match the limits.conf memlock setting). You can also change AMM to ASMM after database installation, but it is easier to do when you install the database.



4. After you have started to use your database and there is data inside it, run the hugepages_settings.sh script. This script calculates a recommended value for the vm.nr_hugepages parameter.
You can get this script from My Oracle Support:

Shell Script to Calculate Values Recommended Huge Pages/Huge TLB Configuration ( MOS ID 401749.1 )
    $ ./hugepages_settings.sh
    ...
    Recommended setting: vm.nr_hugepages = 1496
    $


5. Add the vm.nr_hugepages and vm.hugetlb_shm_group parameters in the /etc/sysctl.conf file (the vm.hugetlb_shm_group parameter value is the dba group id).
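For example (the group id 501 is a placeholder; check your dba group's id from /etc/group):
    vm.nr_hugepages = 1496
    vm.hugetlb_shm_group = 501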


6. Stop all the database instances and reboot the server.
After reboot check that values are correct:
grep HugePages /proc/meminfo
HugePages_Total:    1496
HugePages_Free:      485
HugePages_Rsvd:      446
HugePages_Surp:        0

To make sure that the configuration is valid, the HugePages_Free value should be smaller than HugePages_Total and there should be some HugePages_Rsvd.
The sum of HugePages_Free and HugePages_Rsvd may be smaller than your total combined SGA, because instances allocate pages dynamically and proactively as needed.



7. Remember that if any of the following changes, you have to check your recommended hugepages setting again with the script:
    Amount of RAM installed for the Linux OS changed
    New database instance(s) introduced
    SGA size / configuration changed for one or more database instances




There is a document in My Oracle Support (MOS) about how to use Large Memory Pages in Linux:
HugePages on Oracle Linux 64-bit (MOS ID 361468.1)
( https://support.oracle.com/CSP/ui/flash.html#tab=KBHome%28page=KBHome&id=%28%29%29,%28page=KBNavigator&id=%28bmDocDsrc=KB&bmDocID=361468.1&bmDocType=REFERENCE&viewingMode=1143&from=BOOKMARK&bmDocTitle=HugePages%20on%20Oracle%20Linux%2064-bit%29%29 )

Wednesday, October 31, 2012

Oracle 11.2 Time Drift problems in alert log and trace files.

If you get a lot of these lines in the alert log:
Time drift detected. Please check VKTM trace file for more details.

and the <instance_name>_vktm_*.trc files are growing really fast in the trace folder, you have probably hit Bug 9843304.

NOTE! You can get these lines in the alert log occasionally if your server's clock is drifting (which may happen for example in VMware), and that is normal. But if you get them all the time (over several days) and a lot of them, then it is probably because of this bug.

To fix this bug install patch 9843304 (you can get it from MOS (My Oracle Support)).
After that you can disable the debugging info that is added in the logs.
Run the following from sqlplus (and restart the database after that):
alter system set event="10795 trace name context forever, level 2" scope=spfile;

Read more from MOS document:
VKTM Trace Files Generated With KSTMCHKDRIFT "BACKWARD DRIFT ENDED AT" [ID 1185093.1]

Monday, October 29, 2012

Oracle 11.2.0.1 ORA-15061: ASM operation not supported [41]

You might get the following error with 11gR1 or 11gR2 when you are trying to resize a datafile:

Failed to commit: ORA-01237: cannot extend datafile 37 ORA-01110: data file 37: '+TESTDB_DATA0/testdb/datafile/testdata_01.dbf' ORA-17505: ksfdrsz:1 Failed to resize file to size 219136 blocks ORA-15061: ASM operation not supported [41]


a) Usually this is because you have separate homes for Oracle RDBMS and Grid Infrastructure (the clusterware that ASM uses) and you have different PSU upgrades installed in these homes. You can check this with opatch. Run this in both homes and check that they have the same PSUs installed:

opatch lsinventory -oh RDBMS_HOME
opatch lsinventory -oh GRID_HOME


More info about this can be seen in MOS document:
ORA-15061 reported while doing a file operation with 11.1 or 11.2 ASM after PSU applied in database home [ID 1070880.1]



b) But if the same PSUs are installed in both homes, then the problem is probably with relinking.
A patch (for example a PSU) is not correctly relinked, or an OS update has affected Grid Infrastructure home files so that they need to be relinked.

If you have Oracle RAC do following for all nodes one by one:

1) First stop the databases on the node you are relinking, then stop crs:
# GRID_HOME/bin/crsctl stop crs

2) Run following as root user:
# cd GRID_HOME/crs/install
# perl rootcrs.pl -unlock

3) Run following as the Oracle Grid Infrastructure for a Cluster owner:
$> export ORACLE_HOME=GRID_HOME
$> GRID_HOME/bin/relink

4) And run following as root user:
# cd GRID_HOME/rdbms/install/
# ./rootadd_rdbms.sh
# cd GRID_HOME/crs/install
# perl rootcrs.pl -patch

Then check that crs is up and running:
# GRID_HOME/bin/crsctl check crs
After crs is started, start the databases and check that you can run resize operations again.

More info about this can be seen in MOS document:
ORA-15061: ASM Operation Not Supported [41] After Apply PSU #1 & PSU #2 [ID 1126113.1]

Oracle 11.2.0.3 SCAN and VIP IPs are not reachable from other subnets.

There is a Linux bug in Oracle 11.2.0.3 concerning SCAN and VIP IPs. When you reboot a node in a RAC, or there is a failover where these IPs move between nodes, the IP address is
not pingable from a different subnet. This seems to be a Linux-only problem, and everything works normally for servers in the same cluster and same subnet. The problem is that the ARP table (on the router or firewall) gets wrong MAC addresses for these IPs.

When this problem is active and you try to connect to this database from a server in another subnet you get:
ERROR:
ORA-12170: TNS:Connect timeout occurred


As a workaround, after a VIP failover you can run the following command to update the ARP table of the router:
   /sbin/arping -U -c 3 -I <public NIC for vip> <vip ip address>
Or you can also clear the problematic IPs from the ARP table on your router/firewall.



There is bug report and fix for this problem in MOS (My Oracle Support):
Bug 13440962 - Different subnet failed to connect to vip after restart vip
[ID 13440962.8]

You can find the fix from MOS when you Search from the Patches & Updates with patch number: 13440962

When you are installing this patch you might get errors from file/directory permissions. I needed to give read/write permissions for the oracle user in the following directories (and their files):
$GRID_HOME/lib
$GRID_HOME/jlib
$GRID_HOME/crs
$GRID_HOME/bin

And remember to check that under the bin directory the following rights are set (these need to be exactly like this; if they are not correct the database won't start at all):
chmod 6751 oracle
chmod 4750 jssu

and I also needed to create this directory (for oracle user):
$GRID_HOME/.patch_storage

This patch, like every other patch, should first be tested in a test environment.

Sunday, October 14, 2012

Oracle 11g GridInfra ocr backup files cleaning/rotating problem

Sometimes the Oracle automatic ocr backup cleaning is not working, and because of this your $ORA_CRS_HOME/cdata/<cluster_name> directory fills up with .ocr backup files.

The cause of this problem is the ownership of the regular backup files. Check the following files:
---
-rw-r--r-- 1 oracle root 6557696 Jun 6 18:07 week.ocr 
-rw-r--r-- 1 oracle root 6569984 Jun 13 18:07 week_.ocr
-rw-r--r-- 1 oracle root 6569984 Jun 16 18:07 day.ocr
-rw-r--r-- 1 oracle root 6569984 Jun 17 18:07 day_.ocr
-rw-r--r-- 1 oracle root 6569984 Jun 18 06:07 backup02.ocr
-rw-r--r-- 1 oracle root 6569984 Jun 18 10:07 backup01.ocr
-rw-r--r-- 1 oracle root 6569984 Jun 18 14:07 backup00.ocr
---

These files' ownership should be root:root.
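For example (run as root, in the directory mentioned above):
cd $ORA_CRS_HOME/cdata/<cluster_name>
chown root:root week.ocr week_.ocr day.ocr day_.ocr backup00.ocr backup01.ocr backup02.ocr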

After you change the ownership, the new backup files rotate correctly.
Then you can safely delete the older files (the ones starting with generic numbers). But don't delete the regular backup files (week*, day*, backup*); they are used for rotating the ocr backups.

There is also MOS document about this problem:
OCR Backup Filling Up $ORA_CRS_HOME/CDATA/<CLUSTER_NAME> Directory [ID 741271.1]

Sunday, October 7, 2012

Oracle database 12c is coming

Several sites have said that the new Oracle database 12c (c is for cloud) will be launched this December or at the start of next year. This info is based on Larry Ellison's conference call about financial results for the first quarter of fiscal 2013.

More info from these sites:
http://www.channelregister.co.uk/2012/09/20/oracle_openworld_preview_q1_f2012_numbers/
http://www.informationweek.com/software/information-management/oracle-12c-database-open-worlds-centerpi/240008073

This new database has cloud support (Oracle released its own cloud at Oracle OpenWorld 2012), and it supports virtualization better than before. For example, you can use one database for several customers'/services' data securely, so that the data is isolated and private for each customer/service. It will be interesting to see how this affects licensing.

But we still have to wait for final release date and release notes.

Tuesday, September 25, 2012

MySQL server sql tuning.

1. When you need to tune MySQL SQL clauses, the first thing to do is to turn the slow query and general query logs on:

The slow query log shows queries that take more than long_query_time (you can change this parameter too) to execute. The general query log contains all SQL statements received from clients.
But remember that these logs generate overhead, so it is best to do this kind of tuning in a test environment.

You can set the slow query log on/off in the my.cnf init file (or by starting the server with these parameters) with slow_query_log[={0|1}], and optionally set the path and filename with slow_query_log_file=<path_and_log_name> .
On MySQL servers older than 5.1.29 the parameter name for the path and file name is 'log-slow-queries'.

The general query log is set with the my.cnf parameters (or by starting the server with these parameters) general_log[={0|1}] to enable or disable it, and optionally the path and filename with general_log_file=<path_and_log_name> .
On MySQL servers older than 5.1.29 the parameter name for the path and file name is 'log'.
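For example, the relevant my.cnf lines could look like this (the paths and the 2 second limit are placeholders):
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2
general_log = 0
general_log_file = /var/log/mysql/general.log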

After changes in my.cnf restart the MySQL server.

And from version 5.1 onwards you can change these logs on the fly with these commands:

mysql> SET GLOBAL slow_query_log = ON;       (or set global slow_query_log=1; )
mysql> SET GLOBAL slow_query_log = OFF;      (or set global slow_query_log=0;)

mysql> SET GLOBAL general_log = ON;              (or set global general_log=1; )
mysql> SET GLOBAL general_log = OFF;            (or set global general_log=0;)

And the log paths can be seen with the following (check log_output; if it is FILE then check general_log_file and slow_query_log_file):
mysql> show variables;  



After you have set the logs on, run a normal load against the database and check the logs. From the slow log you see which SQL statements take the most time to run. Then start tuning the statements that take the most time and are run most often.

You can use the mysqldumpslow command to summarize the queries that appear in the slow query log. This makes it easier to find problematic statements (it groups and counts identical statements etc.):
mysqldumpslow [options] <path_and_slow_log_filename>
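For example, to see the ten slowest statement patterns sorted by query time (the path is a placeholder):
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log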

More info about mysqldumpslow:
http://dev.mysql.com/doc/refman/5.1/en/mysqldumpslow.html




2. Tuning single SQL clauses with explain and sql profiler:

When you want to tune a single problematic SQL clause, first start the SQL profiler (this should work on MySQL Server 5.0.37 and newer):

#Turn sql profiler on:
mysql> SET @@profiling = 1;
#To see parameter value:
mysql> SELECT @@profiling;
#Turn sql profiler off:
mysql> SET @@profiling = 0;

#After you have set sql profiler on. Run the problematic sql with explain:
#For example (this is just an example not real slow query):
mysql> explain SELECT * FROM INFORMATION_SCHEMA.TABLES ;
+----+-------------+--------+------+---------------+------+---------+------+------+----------------------------------------+
| id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows | Extra                                  |
+----+-------------+--------+------+---------------+------+---------+------+------+----------------------------------------+
|  1 | SIMPLE      | TABLES | ALL  | NULL          | NULL | NULL    | NULL | NULL | Open_full_table; Scanned all databases |
+----+-------------+--------+------+---------------+------+---------+------+------+----------------------------------------+
1 row in set (0.00 sec)
----------
The 'possible_keys' value tells you what keys/indexes the query could use, and 'Extra' tells you how MySQL resolves the query (sorting the data with where, index, etc.). The mysql client also prints an elapsed time for each statement, but it is better to use the profiler for time comparisons when tuning queries.

#Check the problematic sql's profile number for more info (it is the last Query_ID in profiles) (by default this shows the last 15 queries, but you can change that with the profiling_history_size parameter):
mysql> show profiles ;
+----------+------------+-------------------------------------------------+
| Query_ID | Duration   | Query                                           |
+----------+------------+-------------------------------------------------+
|        1 | 0.00038300 | select @@profiling                              |
|        2 | 0.00084500 | show databases                                  |
|        3 | 0.00021600 | show profile for query2                         |
|        4 | 0.15603600 | SELECT * FROM INFORMATION_SCHEMA.TABLES         |
|        5 | 0.00068200 | explain SELECT * FROM INFORMATION_SCHEMA.TABLES |
+----------+------------+-------------------------------------------------+
5 rows in set (0.00 sec)
#Check the problematic sql's profile (this shows the steps that are made when the query runs, with the time for each step). (You can also give options for the 'show profile' command (CPU, MEMORY, SWAP...); read more here: http://dev.mysql.com/doc/refman/5.1/en/show-profile.html ):
mysql> show profile for query 5 ;
+--------------------+----------+
| Status             | Duration |
+--------------------+----------+
| starting           | 0.000197 |
| Opening tables     | 0.000099 |
| System lock        | 0.000016 |
| Table lock         | 0.000050 |
| init               | 0.000052 |
| optimizing         | 0.000020 |
| statistics         | 0.000026 |
| preparing          | 0.000024 |
| executing          | 0.000038 |
| end                | 0.000022 |
| query end          | 0.000015 |
| freeing items      | 0.000038 |
| removing tmp table | 0.000020 |
| closing tables     | 0.000034 |
| logging slow query | 0.000015 |
| cleaning up        | 0.000016 |
+--------------------+----------+
16 rows in set (0.00 sec)

#Now you have better info on what the query is doing and what the most time-consuming step is (is it using the correct index? Or can you change the query itself? etc.). Tune it and then run the explain/profile steps again to see if you get an improvement (compare the first and the new profile and explain plan).

#Usually creating the correct index is the easiest solution for basic query problems. So you create a new index and check whether the query starts to use it and whether it improves performance. (Sometimes you need to drop old indexes and/or use an optimizer hint to get queries to use the correct indexes. More about index hints: http://dev.mysql.com/doc/refman/4.1/en/index-hints.html ):

mysql> create index...

mysql> explain select ...

mysql> show profiles ;

mysql> show profile for query 6 ;
.
.
.
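A concrete version of those steps could look like this (the table, column and index names are made up):
mysql> CREATE INDEX idx_orders_customer ON orders (customer_id, created);
mysql> EXPLAIN SELECT * FROM orders WHERE customer_id = 42 ORDER BY created DESC;
mysql> SHOW PROFILES;
mysql> SHOW PROFILE FOR QUERY 7;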


Also remember that if you are tuning 'group by' clauses there are certain limits with those (read more here: http://dev.mysql.com/doc/refman/5.1/en/group-by-optimization.html ). Usually a big index which contains all or most of the query's columns is more efficient (but also takes more time to update). And remember to drop all unnecessary indexes.

Wednesday, September 19, 2012

Oracle Database and files corruption checks.



1. Corruption check with RMAN:

With RMAN you can check physical and logical corruptions.

#To check only physical corruptions:
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

#To check both logical and physical corruptions:
 RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL; 

NOTE! BACKUP VALIDATE does not take a real backup; it only checks the files. But you can use CHECK LOGICAL when you are backing up your database. You can also check backups before restoring them with RESTORE DATABASE VALIDATE or RESTORE ARCHIVELOG ALL VALIDATE;


With RMAN you can also use the VALIDATE command without BACKUP. This gives you more options for checking and can also check the control file and spfile:

#These check both logical and physical corruptions.
VALIDATE DATABASE;
VALIDATE BACKUPSET <backupset number>;
VALIDATE DATAFILE <datafile_number> BLOCK <block_number>;



2. Corruption check with dbv (DBVERIFY utility):

There is also the dbv tool in the Oracle server home's bin folder, and with it you can check datafile and segment corruptions. dbv finds both physical and logical corruptions.
#Example (there are more options but this gets you started; remember that the block size can vary between tablespaces, you can check it from the dba_tablespaces view (dbv help=y lists all options)):
dbv  file=<filename> blocksize=<blocksize> logfile=<filename>
dbv file=system.dbf blocksize=8192 logfile=corrupt_check.log

And of course you can make, for example, a shell script that runs this check for all datafiles.
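A minimal sketch of such a script (it assumes ORACLE_HOME and ORACLE_SID are set, sqlplus is in the PATH, and the default 8K block size is used everywhere):

#!/bin/sh
# List all datafiles of the local instance, then run dbv against each one.
files=$(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pagesize 0
select name from v$datafile;
exit
EOF
)
for f in $files; do
  dbv file="$f" blocksize=8192 logfile="$(basename "$f").dbvlog"
done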

You can also use dbv to check certain segments (the segment_id format is <tsn>.<segfile>.<segblock>):
dbv segment_id=<tsn.segfile.segblock> logfile=corrupt_check.log


NOTE! Whichever of these tools you use to check corruptions, all findings are added to the
V$DATABASE_BLOCK_CORRUPTION view.

Saturday, September 15, 2012

Oracle Controlfile move or rename.

If you need to move/rename an Oracle controlfile, this is how to do it:
#Check current controlfiles from sqlplus (as oracle user in database server):
sqlplus / as sysdba
SQL>  select name from v$controlfile;

NAME
------------------------------------------------------------------------------------------

/data/oradata/testdb/CONTROL01.CTL
/arch/oradata/testdb/CONTROL02.CTL
/redo/oradata/testdb/CONTROL03.CTL



#Check the parameter which sets the controlfiles:
SQL> show parameter control_files

NAME             TYPE        VALUE
---------------  ----------  ------------------------------------
control_files    string      /data/oradata/testdb/CONTROL01.CTL,
                             /arch/oradata/testdb/CONTROL02.CTL,
                             /redo/oradata/testdb/CONTROL03.CTL



# Change control_files value in SPFILE (I change CONTROL03.CTL to CONTROL03_NEW.CTL):
SQL> alter system set control_files='/data/oradata/testdb/CONTROL01.CTL', '/arch/oradata/testdb/CONTROL02.CTL', '/redo/oradata/testdb/CONTROL03_NEW.CTL' SCOPE=SPFILE;

#Shut down the database; the changes take effect after you restart the database:
SQL> shutdown immediate;
SQL> exit


#Make the same change on the OS side (as oracle user):
In unix:
mv /redo/oradata/testdb/CONTROL03.CTL /redo/oradata/testdb/CONTROL03_NEW.CTL
In Windows you can do this change from Windows explorer.


#Restart database from sqlplus (as oracle user):
sqlplus / as sysdba
SQL> startup


#Check that the controlfile changes are ok:
SQL> select name from v$controlfile;

NAME
------------------------------------------------------------------------------------------

/data/oradata/testdb/CONTROL01.CTL
/arch/oradata/testdb/CONTROL02.CTL
/redo/oradata/testdb/CONTROL03_NEW.CTL

Wednesday, September 12, 2012

Oracle Replications version support

Oracle has several ways to do data replication:
Basic Replication (materialized views),
Advanced Replication (materialized views),
Data Guard,
Streams.

And of course there are also differences between the EE and SE editions in which replication methods you can use (and how):


Basic Replication is supported in both EE and SE.


Advanced Replication is EE only.


Data Guard is EE only (but you can create your own archivelog shipping scripts for an SE standby).


Streams can be used in SE, but only with synchronous capture (this captures only DML and is recommended for only a couple of tables). If you plan to use Streams for many tables, an entire schema or a database, then you should use the capture process, which is EE only.


If you decide to do data replication yourself, for example with triggers and database links, those are available in all editions.

Thursday, September 6, 2012

Oracle database version check.

There are several ways to check your Oracle database version.

1. Oracle Universal Installer shows you all installed products.
On Unix run runInstaller, on Windows run setup.
On both OSes this file can be found in $ORACLE_HOME/oui/bin .



2. But an easier way is to just connect to the database with sqlplus like this (as oracle user):
sqlplus system@test

#First you see the sqlplus version:
SQL*Plus: Release 11.2.0.1.0 Production on Wed Sep 5 08:08:53 2012

Copyright (c) 1982, 2009, Oracle. All rights reserved.

#Then you have to give password:
Enter password:

#And after that you see the database server version of the database you're connected to:
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL>



3. When you are connected to the database via sqlplus you can also check the v$version view. This view shows more detailed version information for the current database server:
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production



NOTE! If you have several server installations, or server and client installations, on the same machine, you should use full paths with these commands; otherwise you can get mixed information. In the sqlplus example I used a client and a database server of different versions, and that is why the sqlplus and database server versions differ.

Sunday, August 26, 2012

Oracle RAC next challenger.

EnterpriseDB has released a beta version of their Postgres Plus xDB Replication Server with Multi-Master. Oracle RAC has been the best and only real multi-master solution for several years. A lot of other multi-master systems have tried to challenge Oracle RAC, but none of them have succeeded. Now EnterpriseDB is trying to do it with their new multi-master replication server.

Here you can find more information about this new product:
http://www.enterprisedb.com/multi-master-replication

I have to test this product before giving any recommendations about it. But it looks interesting.

Sunday, August 19, 2012

Oracle11g AMM Monitoring and Tuning

Monitoring AMM:

The dynamic view V$MEMORY_DYNAMIC_COMPONENTS shows the current sizes of all dynamically tuned memory components, including the total sizes of the SGA and instance PGA.
SQL> select * from v$memory_dynamic_components order by component;


Tuning AMM:

The view V$MEMORY_TARGET_ADVICE shows tuning advice for the MEMORY_TARGET initialization parameter. If you use Enterprise Manager (or Grid Control) you can also use the graphical Memory Advisor.

SQL> select * from v$memory_target_advice order by memory_size;
The V$MEMORY_TARGET_ADVICE view contains the following columns:
MEMORY_SIZE (alternative MEMORY_TARGET sizes), MEMORY_SIZE_FACTOR (value 1 shows current memory size), ESTD_DB_TIME (estimated DB time to complete the current workload), ESTD_DB_TIME_FACTOR (factor for estimated DB time) and VERSION.

So from this view you can check how DB time would change if you changed MEMORY_TARGET.


The V$MEMORY_RESIZE_OPS view is a circular history buffer of the last 800 completed memory resize requests.

Wednesday, August 15, 2012

Oracle 11g Automatic Memory Management (AMM) size changing.

Oracle 11g introduced the new memory management parameters
MEMORY_MAX_TARGET and MEMORY_TARGET.
MEMORY_MAX_TARGET defines the maximum size for database memory, and if you need to change it you'll need to restart the database instance. MEMORY_TARGET defines the target memory size for the database; this can be changed dynamically with no need to restart the instance, but MEMORY_TARGET can't be bigger than MEMORY_MAX_TARGET.
With these parameters set, Oracle will automatically tune the PGA and SGA sizes as it needs.
On Linux (RHEL) you first need to make the tmpfs /dev/shm as big as you want to grow MEMORY_MAX_TARGET. To do this you have to umount /dev/shm, change its size (do this also in fstab), and then change the parameters in the database.
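A sketch of the /dev/shm resize (the 12g size is an example; do this while the instance is down):

# umount /dev/shm
# mount -t tmpfs shmfs -o size=12g /dev/shm

And the matching /etc/fstab line:
shmfs  /dev/shm  tmpfs  size=12g  0 0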
You can still use the old ASMM and other memory management variants with Oracle 11g, but usually AMM is good enough.
To change AMM parameters:

Go into the database with sqlplus (as sysdba):
This shows current memory target parameters:
SQL> show parameter target

These change both AMM parameters:
SQL> alter system set MEMORY_MAX_TARGET = 10G SCOPE = SPFILE;
SQL> alter system set MEMORY_TARGET = 10G SCOPE = SPFILE;
SQL> shutdown immediate;
On Linux, make /dev/shm bigger (read more above) in this step.
SQL> startup
And check that the change was successful:
SQL> show parameter target

And if you already have a big enough MEMORY_MAX_TARGET and you only need to make MEMORY_TARGET bigger, you can do it on the fly with this:
SQL> alter system set MEMORY_TARGET = 10G SCOPE = BOTH;

Wednesday, August 8, 2012

Archivelog destination change on fly

Sometimes you need to change your archivelog destination; you might need a bigger disk, etc.
Here is how you can do that.

After you have added the new disk into the OS (remember that with RAC you need to add the same disk into all nodes) and into Oracle ASM (if you use ASM), you need to do the following:


1. Check which archivelog destination you are using now:

 Go into sqlplus (as sysdba):

(This will show which parameter or parameters you have used for the archivelog destination.)
(There can be several destinations with some configurations, but usually only one.)
(And if there are no destination paths in these parameters, then you are using the "Flash Recovery Area" as the archivelog destination.)
SQL> show parameter log_archive_dest

Check also this (you might be using the same disk for the flash recovery area, or your archivelog destination might be in the flash recovery area):
(This will show the flash recovery area and its size.)
SQL> show parameter db_recovery

You can also use the following command to see the archivelog destination, but if you have multiple destinations this shows only one at a time:
SQL> archive log list

2. Change the archivelog destination to the new disk on the fly:

If you are using a log_archive_dest parameter, change it this way from sqlplus (as sysdba):
SQL> alter system set log_archive_dest_1 = 'LOCATION=/data/oradata/test/arch' scope=both;
If you are using the same old disk for the flash recovery area too, change this also:
SQL> alter system set db_recovery_file_dest = '/data/oradata/test/flash' scope=both;
If you are using the flash recovery area as the archivelog destination, you need to do the following:
If you use ASM, add the new bigger disk to the ASM diskgroup where your flash recovery area is (check the db_recovery_file_dest parameter (see above)) and start using it by increasing the db_recovery_file_dest_size parameter:
SQL>  alter system set db_recovery_file_dest_size=<new_size> scope=both;

And if you are not using ASM, you need to change the db_recovery_file_dest parameter to point to the new disk (like with the log_archive_dest parameters):
SQL>  alter system set db_recovery_file_dest='<path>' scope=both;

3. Check that the new destination is working as it should:

Check that everything is working ok:
SQL>  archive log list
SQL>  alter system archive log current;

If these do not give you errors, you can remove the old disk from the OS (and from ASM if you are using it).
 

Saturday, August 4, 2012

Oracle 11.2.0.3 RHEL6 and VMware multicast problems in prerequisite check

When you install Oracle 11.2.0.3 on RHEL6 and VMware, you might face multicast test failures in the Prerequisite Checks step of the Grid Infrastructure installation and the Database software installation.

These failures come because cluvfy runs the prerequisite tests for all ports, and if your firewall does not allow all of them you get those failures. But you can download the mcasttest.pl script from MOS (My Oracle Support); search for it on MOS with the script name. The MOS document also gives hints on how to run the script. The document speaks about the 11.2.0.2 version, but you can use the same script for checking 11.2.0.3. This script can be used to test multicast with the correct ports, and if it goes through you can safely ignore the multicast failures in the prerequisite check.

for example:
perl mcasttest.pl -n node1_name,node2_name -i eth1

If you want to test your firewall settings after installation you can run cluvfy check from command line this way (as oracle user):
/app00/app/11.2.0/grid/bin/cluvfy stage -post hwos -n node1_name,node2_name -verbose 

The multicast check lines can be found at the end of the log file, just like in the installation logs (grep for 'Checking multicast communication').

Tuesday, July 31, 2012

Moving or renaming Oracle datafiles.

If you need to move or rename Oracle datafiles, you can do it in two different ways.

You can use the alter database or the alter tablespace clause to move or rename datafiles.

With alter database you need to shut down the database instance to make the changes.

With alter tablespace you don't need to shut down the database instance. But you can't use alter tablespace to move/rename datafiles of the SYSTEM, UNDO or TEMP tablespaces. And when you use alter tablespace for moving/renaming, it takes those datafiles offline, which means that if your applications try to use data from those datafiles while they are offline, they'll get errors. So usually you don't want to use this clause with production databases. Changes like these in a production environment should first be tested in a test environment and then done in a maintenance window with the alter database clause and a clean shutdown.

You can also use RMAN for renaming/moving the datafiles, but that also needs offline time. Although with RMAN the offline time is shorter, because you can copy the datafile into the new path before you take it offline, I prefer the planned maintenance window with the alter database clause.

Datafiles moving with alter tablespace clause:
----
1. Change the tablespace offline.

sqlplus / as sysdba
SQL> alter tablespace test_data offline;

2. Rename/move the datafile using operating system commands.
For example in Linux:
cp /data/oracle/datafiles/test_data_01.dbf /data_new/oracle/datafiles/test_data_01.dbf
or renaming it at the same time:
cp /data/oracle/datafiles/test_data_01.dbf /data_new/oracle/datafiles/test_data_05.dbf

3. Use the alter tablespace clause to rename the file in the database.

SQL> alter tablespace test_data rename datafile '/data/oracle/datafiles/test_data_01.dbf' TO '/data_new/oracle/datafiles/test_data_01.dbf';

or renaming it at the same time:
SQL> alter tablespace test_data rename datafile '/data/oracle/datafiles/test_data_01.dbf' TO '/data_new/oracle/datafiles/test_data_05.dbf';

4. Change the tablespace back online. After this you can use all of this tablespace's datafiles normally again.
SQL> alter tablespace test_data online;

----


Datafiles moving with alter database clause:
----
1. Shutdown the database instance.
sqlplus / as sysdba
SQL> shutdown immediate;
2. Rename/move the datafile using operating system commands.
For example in Linux:
cp /data/oracle/datafiles/test_data_01.dbf /data_new/oracle/datafiles/test_data_01.dbf
or renaming it at the same time:
cp /data/oracle/datafiles/test_data_01.dbf /data_new/oracle/datafiles/test_data_05.dbf
3. Start the database in mount state and rename/move the datafiles in the database with the alter database clause. You need to use full paths and filenames.
SQL> startup mount;

SQL> alter database rename file '/data/oracle/datafiles/test_data_01.dbf' TO '/data_new/oracle/datafiles/test_data_01.dbf';

or renaming it at the same time:
SQL> alter database rename file '/data/oracle/datafiles/test_data_01.dbf' TO '/data_new/oracle/datafiles/test_data_05.dbf';
4. Open the database.
SQL> alter database open;
----

Friday, July 27, 2012

Undo tablespace shrink

If your Undo tablespace is overgrown and you need to shrink it, this can be done by creating a new Undo tablespace, starting to use it, and removing the old one after that. Here is an example:

----

Create a new Undo tablespace with a suitable size:
SQL> create undo tablespace undotbsnew datafile '<path_to_the_datafiles_usually same_where current_undo_datafile_is>\UNDOTBSNEW.DBF'
size 1024m autoextend on next 5120K maxsize 32767M;

Set the new tablespace as the current Undo tablespace:
SQL> alter system set undo_tablespace= UNDOTBSNEW scope=both;

Drop the old tablespace
SQL> drop tablespace UNDOTBS1 including contents;

-Don't use "and datafiles" clause in drop tablespace instead of that clear old Undo datafiles from OS side (Read my earlier "Cleaning Oracle logs / trace files / datafiles on the fly. " post for more information about this).

In Windows, make an empty text file test.txt and copy it over the datafile:
copy test.txt UNDOTBS1.DBF
In Linux do the following:
cat /dev/null > UNDOTBS1.DBF

After you restart the database you can remove these old Undo datafiles from the OS. Before the database restart the OS does not let you do it.


And after you have removed the old Undo datafiles you can recreate the Undo tablespace with the old name. But usually there is no need for that.

And remember, if you use an auto-extending Undo tablespace, don't do too big DML operations without frequent commits. If you do, you can end up with a really big Undo tablespace and other problems.

----

Monday, July 23, 2012

Cleaning Oracle logs / trace files / datafiles on the fly.

Regardless of which OS you are using, a straight delete of Oracle logs / trace files / datafiles is not a good option. Of course there are differences between files, but usually the space used by these files is freed up only after a database instance restart, because the instance keeps the file handles open. So the best way to clean these files is to truncate them to zero size; this way the OS frees up the used space on the fly.

In *unix you can do it this way:
cat /dev/null > listener.log
and in Windows you can copy a zero-size file over the original big file.
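In a Windows cmd shell, copying from the NUL device does the same truncation (the file name is just an example):
copy /y NUL listener.log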
When you are removing tablespaces, do not use the "and datafiles" clause in the drop tablespace clause. Instead, drop the tablespace without "and datafiles", then truncate the dropped tablespace's datafiles to zero size (like above) from the OS side. And last, delete the datafiles from the OS side after you have restarted the database.

Friday, July 20, 2012

Recreate TEMP tablespace in Oracle

Sometimes you need to recreate the TEMP tablespace in Oracle, for example if it is overgrown and you can't shrink it. With the following SQL clauses you can create a new TEMP tablespace, start to use it, and remove the old one:

The new TEMP tablespace name in this example is TEMP2 and the current TEMP tablespace name is TEMP.

Create new TEMP tablespace:
CREATE SMALLFILE TEMPORARY TABLESPACE "TEMP2" TEMPFILE '<path_to_datafiles>temp02.dbf' SIZE 1024M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE 6144M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K;


Start to use new TEMP tablespace:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE "TEMP2";



Remove the old TEMP tablespace:
(This removal can be done only if nobody is using the old TEMP tablespace.)
You can check if there are still users in the old TEMP tablespace with the following SQL (if you are using RAC you need to run this on all nodes):
select a.username, a.sid, a.serial#, a.osuser, b.tablespace, b.blocks from v$session a, v$tempseg_usage b where a.saddr = b.session_addr order by b.tablespace, b.blocks;

If there are users in the old TEMP tablespace, you need to kill those sessions or wait until they end the normal way.
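A session can be killed like this, using the sid and serial# values from the query above:
SQL> ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE;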

After there are no open sessions using the old TEMP tablespace, you can remove it with the following SQL:
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;

Wednesday, July 18, 2012

Oracle 11.2 RHEL 6 VMware and ASMlib

If you need to install Oracle 11.2 on RHEL 6 VMware servers, you will face problems with Oracle ASMlib.
ASMlib is not supported anymore on RHEL 6. Oracle recommends that you use Oracle Unbreakable Linux because it supports ASMlib, and that is not a bad choice at all.

But if you need to stick with RHEL 6 then you have to do it without ASMlib, and you can do that with udev rules. If you are not using VMware you can also use the multipath option, but with VMware you need to use udev.

I assume here that all disks are already added into the needed virtual machines.

First you have to enable SCSI IDs in VMware; if you don't do this you will not see the disks' IDs in RHEL.
Shut down the virtual machine and do the following (this needs to be done on all servers if you are installing RAC): go to the Options tab --> select Advanced -> General on the left and click the Configuration Parameters button --> click the Add Row button and add disk.EnableUUID with the value true --> OK --> reboot the virtual machines.

More info about SCSI UUID's from here:
http://diznix.com/2011/05/21/the-case-of-vmware-and-the-missing-scsi-id/

Then create udev rules to find the disks and to create symlinks to them (this needs to be done on all servers if you are installing RAC):
Create file /etc/udev/rules.d/99-oracle-asmdevices.rules

The file can look like this (in PROGRAM you fetch the SCSI id, which should match the correct disk's RESULT value, and SYMLINK creates an accepted symlink for the disk):

# crs
KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u --device=/dev/$parent /dev/$name", RESULT=="36000c29de840f074b4c0ecd97719d3c0", OWNER="oracle", GROUP="dba", MODE="0660", SYMLINK+="sdASMcrs"
# data
KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u --device=/dev/$parent /dev/$name", RESULT=="36000c29d5b7c9a2cf306acf4dbc2d6de", OWNER="oracle", GROUP="dba", MODE="0660", SYMLINK+="sdASMdata"
# arch
KERNEL=="sd*1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u --device=/dev/$parent /dev/$name", RESULT=="36000c29051374180400e5599ae556288", OWNER="oracle", GROUP="dba", MODE="0660", SYMLINK+="sdASMarch"
.
.
.

Last, reboot the servers, check that the disks are visible correctly on all of them, and start the installation.
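To find the RESULT values for the rules, and to verify the symlinks after the reboot, something like this works (the device name /dev/sdb is an example):
#Print the SCSI id of a candidate disk (use the output as the RESULT value in the rule):
/sbin/scsi_id -g -u --device=/dev/sdb
#After the reboot, check that the symlinks exist and point at the right partitions:
ls -l /dev/sdASM*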

Tuesday, July 10, 2012

Grid Control 11g (OMS) repository database moving into new database and passwords

If you need to move your Grid Control (OMS) repository database into a new database, you probably do it with the following guide (this can be found in MOS (My Oracle Support)):
11g Grid Control: Steps for Migrating the 11g Grid Control Repository from One Database to Another [ID 1302281.1]

This guide will work if you remember to use an EE database and always run the scripts as the correct user. The best way to run the individual scripts is to sign out of and back into sqlplus between different scripts; this way you always check that the user is correct.

The problem with this guide is the Grid Control users' passwords. The scripts that this guide uses also move the Grid Control users, but they recreate those users' passwords. After the move is done, the Grid Control console users' passwords are the same as the usernames, except all passwords are written in caps.

If you want to change the passwords back to what they were in the old database, you can do it this way:

Check the username's password hash from the old Oracle 11 repository database with the following SQL (older Oracle versions showed the password hash in the dba_users password column, but it is not there anymore in Oracle 11, for security reasons):
--
select REGEXP_SUBSTR (DBMS_METADATA.get_ddl ('USER','USERNAME'), '''[^'']+''') PASSWD from dual;
-- 

and after that change the user's password in the new repository database with the following SQL (using the hash you got from the first SQL):
--
alter user USERNAME identified by values 'S:91F217F4280C854E10D66C2C79E729C55B45E26DAD30297D8705542EF5B5;45BE3895069161E2';
--

You must run the above SQL for all Grid Control console users to change their passwords.

And of course you can use the same SQL for other Oracle 11 password changes where you need the old password. For example if you need to use the older exp/imp tools and you need to create users in the new database before the import.

Monday, July 9, 2012

Oracle Grid Control database licences

Oracle Grid Control needs an Oracle EE version database.
Oracle Grid Control 10g and earlier versions contained an embedded EE version database.
After that there is no embedded database included in the installation packages anymore, but Oracle lets you install a single instance Oracle EE database freely for Grid Control use only. So the good news is that you still don't need to buy an EE license to run newer Grid Control versions for monitoring your Oracle databases.

Thursday, June 28, 2012

Compile all invalid objects in Oracle 10.2 ->

If you have several invalid objects in an Oracle database, you can try to compile them all together with this in sqlplus (the user running this needs rights to execute the utl_recomp package):
exec utl_recomp.recomp_parallel(4);

recomp_parallel lets you set the degree of parallelism; 4 is a good default.

It is also possible to use recomp_serial with utl_recomp.
If you need more info about utl_recomp you can find it here:
http://docs.oracle.com/cd/E14072_01/appdev.112/e10577/u_recomp.htm


And you can check the database's invalid objects with this:
select object_name, object_Type, owner from dba_objects where status='INVALID';

You can also use all_objects if your user does not have rights to use dba_objects.

Failed login attempts in Oracle 11.2 database

After moving an Oracle 11.2 database into another RAC cluster, we faced a lot of failed login attempts in Grid Control. We needed to know where these login attempts were coming from, and with the following SQL we got enough information to tackle these logins:

select to_char(TIMESTAMP,'YYYY-MM-DD HH24:MI:SS') tstamp ,OS_USERNAME,USERNAME,USERHOST,ACTION_NAME, TERMINAL from DBA_AUDIT_SESSION where returncode = 1017 order by 1;

returncode 1017 stands for the Oracle error "invalid username/password; logon denied",
and that's why we get those failed login attempts with it.