Single Tenant ist unnütze Komplexität? / Single Tenant is only unnecessary complexity? - Remote refreshable clone

My presentation (in German) at the DOAG conference was about Single Tenancy and why (some / a lot / most) people think it is unnecessary complexity or - even worse - unnecessary at all.

Yes, my first thoughts when container-based databases were introduced in 12.1 were the same: I found them completely unnecessary. With 12.2 I changed my mind (a little bit, or even a little bit more), as there are really nice features included which can be used for free, even on Standard Edition 2.
I don't say you should or must use these features, but as a consultant and DBA I always want to help my customers do things better and free up their work life from boring stuff.
And what is more boring than staying in the office or working from home at night or on a weekend, when you could have time to play with your kids, take a trip to a cool event or meet some friends?
So if you can't live without spending time outside your office hours on work, don't read this article any further. 😏

For the DOAG conference I created a live demo scenario on my notebook (with a Linux VM), so I would like to share not only my knowledge but also some of my scripts, so that you can test everything on your own.

What you need in order to use the scripts as they are, so that you can simply copy and paste them, is not that much.

Preparation:
  • 2 container databases XFEE1 and XFEE2 (XF is my naming convention on test systems for XFS file systems, AC for ACFS and AS for ASM; EE means Enterprise Edition, SE means Standard Edition - the tests also work with Standard Edition databases and on ACFS/ZFS without any changes).
  • 1 pluggable database EE1PDB1 inside of XFEE1
  • tnsnames.ora entries to reach XFEE1, XFEE2, EE1PDB1 and EE2PDB1 (EE2PDB1 is used later for the cloned/snapshot-cloned PDB1 inside XFEE2)
  • You need to set CLONE_DB to TRUE (alter system set clone_db=true scope=spfile;) for non-ACFS/ZFS file systems and restart the database after setting it - see the consolidated sketch right after this list.
  • To not violate any license rules you can also set MAX_PDBS to one (alter system set max_pdbs=1 scope=both;)
  • Set all passwords to "oracle" (or adapt the scripts)
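
As a minimal, hedged sketch of the parameter part of the preparation (run it in the CDB where the snapshot clones will be created; whether you need CLONE_DB at all depends on your file system, as noted above):

    sqlplus / as sysdba
    alter system set clone_db=true scope=spfile;
    shutdown immediate
    startup
    -- optional, to stay within the single-PDB licensing limit:
    alter system set max_pdbs=1 scope=both;
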
Case 1 - Refreshable Remote Copy of a PDB:

Some people think about using the refreshable remote copy of a PDB as a poor man's standby database, but the best use case I can think of is setting up test environments (for applications or even for database updates) from your production. Another use case is a reporting PDB which is automatically refreshed once a day, because you can open the refreshable remote copy read only (and as long as it is opened read only, no refresh happens).

Think about the following case: you have your production database up and running and you are asked to create a test environment. The business guys tell you that they need the test environment in 3 days and that the data should be the state after business hours - or, even more horrible, as of Saturday night.

What is your way to do this? Creating a standby database with Data Guard (not available for SE2) and stopping recovery on the evening in question, so you can prepare things and don't need to work a lot that evening? Or do you prepare an empty database, wait for that evening, then run an export on the production and an import into the test environment? How long does this take, and how many hours do you spend that evening doing all this work?
A quick solution is to create a refreshable remote copy of a PDB. All you need to do is set it up in the days before "that evening" comes and let it refresh automatically every minute; all you need to do on "that evening" is stop the refresh and open the database. Maybe, for application reasons, you need to run a SQL script that is always needed (just as with a standby or with export/import). If you have tested that once or twice, you could even create a cron job that does a last refresh and opens the database / runs your application script when the time has come. So you don't need to work that evening or weekend at all. 😊

If you want to test this, you need the items mentioned in the preparation part of this post. Then you can just run the following scripts.
The steps in the scripts are the following (you don't need to create the common user at the receiving database, I have just re-used the picture):


  1. Create a common user and grant it the required rights
  2. Create a database link to the PDB you want to clone, from the CDB$ROOT you want the PDB cloned to.
  3. Create the refreshable PDB clone and let it refresh until you need it
  4. Open the automatically refreshed PDB read-write
Script 1 is run at the CDB$ROOT of the production database.
Script 2 is run at the CDB$ROOT of the CDB you want the PDB cloned to.
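
My original scripts were attached to the presentation; as a rough, hedged sketch of what they contain (the user name C##CLONEUSER, the db link name and the one-minute refresh interval are assumptions here; file placement via file_name_convert or db_create_file_dest is left out):

    -- Script 1 (sketch), run in CDB$ROOT of XFEE1, the source:
    create user c##cloneuser identified by oracle;
    grant create session, create pluggable database to c##cloneuser container=all;

    -- Script 2 (sketch), run in CDB$ROOT of XFEE2, the target:
    create database link clonesource
      connect to c##cloneuser identified by oracle using 'EE1PDB1';
    create pluggable database EE2PDB1 from EE1PDB1@clonesource
      refresh mode every 1 minutes;
    -- later, when "that evening" has come:
    alter pluggable database EE2PDB1 refresh mode none;
    alter pluggable database EE2PDB1 open read write;

Keep in mind that cloning from an open source PDB needs the source CDB to run in archivelog mode with local undo; otherwise the source PDB has to be opened read only for the initial copy.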

Maybe you try to refresh or to set the refresh mode to NONE (see script 2) and you get an

ORA-65118: operation affecting a pluggable database cannot be performed from another pluggable database

If so, just set the container to the PDB (alter session set container=EE2PDB1;) and try again.
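
As a small sketch of that sequence (PDB name taken from my environment):

    alter session set container=EE2PDB1;
    alter pluggable database EE2PDB1 refresh;            -- manual refresh, or:
    alter pluggable database EE2PDB1 refresh mode none;  -- stop the automatic refresh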

Hope this helps you in your daily work. Stay tuned, there are more use cases where it is really nice to use PDBs instead of doing it "the old way". I will write a new post as soon as I find the time.

As always, comments are welcome.

You can find the second post regarding remote snapshot clone here

DOAG Conference 2018

Hi Folks,

last year I wrote a small review of the DOAG conference, and I want to do the same for this year's conference.

I was - again - a happy speaker at the DOAG conference 2018. My presentation was about "Single Tenant is only unnecessary complexity?" and I held it on the last conference day at noon. Despite the keynote right before my presentation and despite the time slot when everybody wanted to go to lunch, I think I had more than 100 people attending. In 45 minutes (I ran over by only a few seconds) I presented some features of Single Tenancy on Oracle databases - including a live demo to show how easy it is to do in reality. As a lot of people asked me after the presentation how to do this or that (they started to think about the requirements they have at home), I think it was a valuable presentation overall.

It was good to do this, even though a week before the presentation my VM crashed beyond repair and I was thinking (for some minutes only) about skipping the live demo part. But - luckily - I had a one-week-old backup of the system, so I needed only two night shifts to get everything up and running again. You could have smelled my fear in the minutes before I started the demo - I really hoped that nothing would crash during the presentation...

Besides my own presentation, the 3 days were fully packed with know-how provided by well-prepared speakers. A trend I already saw last year continued this year: in my opinion, the number of live demos is dropping more and more. Unfortunately so, because I like presentations where you can really see "things running". So, if you plan to speak at a conference: yes, it is a shitload of work preparing everything and testing and preparing more and testing again - but it's worth it, guys (by the way, my live demo was a week's worth of work - including the crash)!

What else, hm... the community: even if some people don't believe it, Oracle is "still alive" and so is the community. More than 2'000 people found their way to Nuremberg, and more than 400 speakers, including really famous international speakers (and Oracle product managers), were there. And the best thing is that they are approachable - all of them can be found at the coffee lounge or somewhere else. So you can discuss, you can ask, you can get solutions for free (which is especially valuable when Oracle Support does not seem to do its job very well, as we were told in a session on support satisfaction). Stefan Köhler, for example, was sitting at the coffee lounge for two whole afternoons, ready to discuss performance with everyone.

Another thing I have seen at the conference: the interest in learning more about alternative databases is high, so the open source database talks were fully booked. Especially when PostgreSQL was mentioned in the conference planner, the rooms were filled to the last seat. In my opinion, open source databases will get a bigger piece of the cake (which is why Robotron, and therefore I, will also offer up to 24/7 support for PostgreSQL); nevertheless, they are not soooo free at all once you make a TCO calculation. While the Oracle database is a big, feature-rich database monster from a single source, in open source you have one main project, but if you want to use additional features you need some additional software here, a plugin there, ...

In the end, you either throw some money at a consulting company to get help with the community software, or you throw some money at a couple of internal guys who will find out what you need and how to get it from the communities - or you buy "open source software" from one of the vendors around the communities. The problem then is the same as with Oracle: you will have some kind of vendor lock-in, and nobody knows whether these companies will still exist in 5 or 10 years, or whether they will be bought by someone and the "product" is perhaps no longer supported. It is not so easy at the moment to make the right decisions (at least I think Oracle will still exist in 10 years). And the big companies like Amazon or Salesforce give only a small part of the development they do back to the community versions. As I said, there will be a rising number of open source database installations, but I think Microsoft's SQL Server and Oracle's database will remain the two biggest fish for the next couple of years.

In addition to technical details, there is a lot you can hear about running better projects, about AI and machine learning, about development, DSGVO (GDPR), cloud, ...

Some last words on the organisation: it is astonishing to me how all this works year after year - especially when you know that a big part of the work is done in the spare time of the DOAG members. Organizing everything for more than 2'000 people, including food, the keynotes, the party, all the community activities (like APEX, PL/SQL, ...) - incredible. And the location, with its different rooms and restaurants on each floor, is really perfect.
My conclusion: if you didn't have time to join the conference (it helps to speak German, as most presentations are held in German), try to come to Nuremberg next year (19.-22. November 2019). Hopefully you will find me there again...

For those looking for the scripts of my presentation: you can find them here

My first steps with Oracle Autonomous Transaction Processing Database (aka Autonomous Transaction Processing Service)

The Autonomous Transaction Processing service, or Autonomous Transaction Processing database (service), was released some weeks ago. I will write some more posts in the next weeks once I have done more tests with this kind of service.

Today I want to shed a little light on some questions like:
 - how do you set up an Autonomous Transaction Processing service in the Oracle cloud?
 - what is the Autonomous Transaction Processing service?
 - what can and can't you do with this service at first sight?

Maybe you are an Oracle DBA who has some knowledge of how to set up your environment in the Oracle Cloud. So you are used to creating your VLANs with the firewall rule sets, your nodes, your databases, your database services, your connections; you know the APIs you need to configure your cloud environment using cmd or Python or... Everything is configured by YOU.
When you start with the Autonomous Transaction Processing database, you need none of this knowledge.

What you do need is a compartment and the right to create an Autonomous Transaction Processing database. That's it, folks!
Then we can start creating the service - documented step by step.

Set up your ATP service


1.) Go to the service overview in your Oracle cloud portal for the ATP(D) service:


2.) Press CREATE and the following screen opens. You need to fill it out, but, as you can see, you don't need to specify any network settings like you would, for example, for a "non-autonomous" database service.
You only specify the name, the number of CPU cores and the storage size, and you get an ADMIN user for which you need to specify a complex password. The last step is to subscribe either with a license included or to bring your own license.



Then you return to the console, where you can see that the database is being provisioned.



While you may know from a "non-autonomous" database service that provisioning your environment takes quite a while, the Autonomous Transaction Processing service is ready after a couple of minutes.

3.) Finished after 15 minutes!

But the question now is: how can you access this new service, given that you haven't specified any rules, VLANs, additional nodes ...?

So we need to

Connect to your ATP service

1.) The first step is to use the ADMIN user with your password to access the detail page of your freshly created service:

 After you have signed in, you will find yourself at the service console, which is loading...


2.) When you press the Administration link (marked red in the screenshot above), it takes you to the Administration page, where you can download the Client Credentials.



3.) After setting a password, the download of a zip file starts. You can save the zip file wherever you want. Some people now download the Instant Client to connect to the database, but as I already have some full database and client installations on my laptop, I have integrated the connection into my normal environment.

4.) I already have a dedicated directory that my TNS_ADMIN variable points to, because I don't want a couple of different locations where I store different sqlnet.ora or tnsnames.ora files. To allow my environment to use the SQL*Net protocol to connect to the Autonomous Transaction Processing database service, I copied the zip to my TNS_ADMIN directory and unzipped it locally, but you can do this with your own installation(s) the way you are used to:


Now the last steps regarding the local configuration:
5.) a) First I needed to append the sqlnet.ora properties from my wallet_ROBAUDB directory to the end of my local sqlnet.ora (and so should you), so that it looks like this (see the last two, red-marked lines):


b) The second - and last - step is to add the tnsnames.ora entries from the wallet_ROBAUDB directory to your tnsnames.ora:
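
For illustration only - the host, service and directory names below are placeholders, not the real content of my wallet - the additions look roughly like this:

    # appended to sqlnet.ora (DIRECTORY points to the unzipped wallet)
    WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY = "C:\oracle\network\admin\wallet_ROBAUDB")))
    SSL_SERVER_DN_MATCH = yes

    # one of the entries copied from the wallet's tnsnames.ora
    robaudb_high = (description=
      (address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))
      (connect_data=(service_name=xxxxxxxx_robaudb_high.atp.oraclecloud.com))
      (security=(ssl_server_cert_dn="CN=adb.example.oraclecloud.com,O=Example")))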


The tnsping succeeds and you are able to connect to the database:




Personally, I like to use additional tools, and I am now also able to connect to the service with any other SQL*Net-based product, for example with my PL/SQL Developer:


 First Selects at the Service

The questions I had when starting with the ATP(D) service were, for example: what do I really get with this database service? As it was provisioned so quickly, is it a complete DB? And yes, I have heard that all autonomous database services are supposed to run as RAC with an additional Data Guard setup, but is it a full database I can do whatever I want with (and with all the complexity), like a "normal" database service in Oracle's cloud?

To answer the questions simply - no, it isn't. What you get is a pluggable database, so you get one container of a CDB:
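
A quick way to see this for yourself (a hedged example, not taken from my screenshots):

    show con_name
    select sys_context('USERENV','CON_NAME') as con_name,
           sys_context('USERENV','CON_ID')   as con_id
    from dual;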


You can have a look at the parameters that are set, e.g. the memory parameters:


But you are not allowed to change them:


What you can do is, for example, create a new user (there is a password policy in place) and afterwards start deploying your application from scratch (using SQL scripts, SQL Developer, ...) or load your application dump into it.
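
A hedged sketch of such a user creation (the user name, the password and the DATA tablespace/quota are assumptions - adjust to what your service provides):

    create user demo_user identified by "Str0ngDemo#2018";
    grant create session, create table, create view to demo_user;
    alter user demo_user quota unlimited on data;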


Enough stuff for my first post regarding the Oracle Autonomous Transaction Processing Database.

The next post, I think, will be about how to get your data into this service and a little bit about performance and/or security.

Stay tuned!

DBT-00007 User does not have the appropriate write privilege when starting dbca

At the moment I am preparing some VirtualBox VMs to create the live demos for my presentation "Single Tenancy is only more complexity!?" at the DOAG Conference 2018.

For this, I set up a fresh Linux system and installed 18c Grid Infrastructure on it (using role separation, so GI is installed with the grid user while the databases are installed with the oracle user).

After I had created my two ORACLE_HOMEs with the Standard Edition 2 and the Enterprise Edition database (software only), I wanted to set up the first databases, but unfortunately dbca (Database Configuration Assistant) failed with a "[DBT-00007] User does not have the appropriate write privilege" error. Since I had NEVER seen that error before, I really wondered what was happening (and I don't think I missed any steps). And no, I don't want to check this with Oracle Support, as suggested in the popup window details.

I had set my ORACLE_BASE and ORACLE_HOME before I started dbca, so this wasn't the cause of the error. As the popup window did not show anything in addition, I tried to check the dbca log at $ORACLE_BASE/cfgtoollogs/dbca - but there wasn't any.
So the problem seemed to be related to this directory.

The owner of $ORACLE_BASE/cfgtoollogs was grid (group oinstall) and the permissions were 755 - so only the grid user was allowed to write, group members weren't.

To change this I have first changed the owner of the cfgtoollogs directory:

chown -hR oracle:oinstall /u01/app/oracle/cfgtoollogs
and afterwards I gave write permissions to the subdirectories, so the grid user would still be able to add its stuff to the log directories:

chmod 775 /u01/app/oracle/cfgtoollogs/
chmod 770 /u01/app/oracle/cfgtoollogs/dbca
chmod 770 /u01/app/oracle/cfgtoollogs/asmca
chmod 770 /u01/app/oracle/cfgtoollogs/netca


That's it, folks - dbca started after I made these changes.

Fix for "The better quality empty array" available!

Some of you may have read my post from last October, The "better quality" empty array - or: Why doing nasty things can help avoiding ORA-00600!

In there I told you how to identify the cause of, and build a workaround for, the ORA-00600 [pmus.c#101] and/or ORA-00600 [pmus.c#103] errors when using arrays in packages. Now I want to give you an update on this error.

The error was officially accepted as a bug; you can find it at support.oracle.com:
→ Bug 27242226 : ORA-600[PMUS.C#103] WHILE CREATING/EXECUTING PACKAGE IN SINGLE BYTE CHARACTERSET.

The good news: this bug is fixed in Oracle DB version 19, and you can download the patch for 12.1.0.2 and 12.2.0.1 as a fix. If you need it for another specific version, open an SR and ask support to get it backported.

Backup Oracle Database from MS Windows using UNC to Oracle Database Appliance (Samba / CIFS) and Oracle CloudFS

Every customer helps us build up additional knowledge (in other words, they request something you haven't done or needed anywhere else before).

My current customer has bought two Oracle Database Appliances, ODA X7-2 HA, and they are migrating all Oracle databases, in different versions, from the old, VMware-based Windows environment to the ODA (Virtualized Platform).

After the successful deployment of the ODAs in the customer's VLAN (maybe worth another post), the question was how to move everything from the old environment to the ODAs.

Secure copy (scp / PuTTY's pscp) is OK, but either you need to install PuTTY everywhere or you need to map all server drives on your client and (p)scp everything to the ODAs... not really a fun task.
You also have an additional step if you create, e.g., the RMAN backup locally on the Windows server and then (p)scp it to the ODA. An additional step means (down)time and money, so we wanted to avoid this during the migration.
As NFS was not an option (due to some internal configuration issues), we thought about creating a direct connection between the ODAs and the Windows servers.

CloudFS is a nice feature of ACFS (not only) on the ODAs, but without some additional configuration it can't be used to map a drive on the Windows server, to address it via UNC from RMAN, or as a directory for Data Pump exports to the ODA.

The customer wanted to allow the Windows servers, for the duration of the migration of all databases, to write from any machine directly (without setting any user/password) to the CloudFS on the ODA X7-2 HA, so we had to configure Samba (smb) on the ODA.

If you need it, here are the steps you can walk through if you want to use CloudFS via UNC paths or if you need to map the CloudFS directory from the ODA to your Windows server without user/password.
  1. Ensure that CloudFS is configured and sized correctly to have enough space for your files. Typically, CloudFS is created when ODA_BASE is deployed, but you can skip that step there. If you have skipped it, you can create CloudFS using asmca (ASM Configuration Assistant) or you can do it manually. I didn't need to, but I think Matthew described very well in his cloudfs post how to do it if you don't want to use asmca. Even a resize can be done very easily by following this Oracle support note.

    To check the existence and the size of your CloudFS, run
    df -h /cloudfs

    CloudFS is mounted on both ODA nodes at /cloudfs. Owner:group should be oracle:oinstall.

  2. Configure Samba in ODA_BASE (as root), GLOBAL part
    You need to edit the smb.conf file which is located at /etc/samba:
    vi /etc/samba/smb.conf

    Skip to - roughly - line 100 (Standalone Server Options). There you can see this configuration:
        security = user
        passdb backend = tdbsam

    Change this (even though using security = share is marked as deprecated) to:
       security = share
       passdb backend = tdbsam
       guest account = oracle


  3. Configure Samba on both nodes with the name and the rights of your share.
    Now it's time to put your share configuration into the smb.conf file. At approx. line 255 the "Share Definitions" part starts. There you need to put in the configuration for your share. The part in [ ] is the name you will later see in Windows Explorer - so it's the public name of the share. We named it ODAPROD_CLOUDFS, but typically just CLOUDFS is enough.

    [ODAPROD_CLOUDFS]
    comment=cloudfs files
    path=/cloudfs
    browsable=yes
    public = yes
    writable = yes
    printable = no


    After you have inserted this, write the smb.conf and quit vi.

  4. Copy the smb.conf to second node
    To be able to use the share "highly available" later on, copy the changed smb.conf from your local node's /etc/samba/smb.conf to the second node using scp.

  5. Start / Stop the Samba Service on both nodes.
    You need to start the service at both nodes manually.
    To do this, connect as root to ODA_BASE on both nodes and run
    service smb start

    If you want to stop the Samba share, you can run
    service smb stop

  6. Test UNC path from Windows
    You can now test the connection. Open a File Explorer window on MS Windows; then you should be able to use the share. Don't use the node names to connect to the share, use the SCAN name of the system, so you are able to use the share even if one of the nodes is not available, e.g. during a reboot.

    In my case, we can now use the UNC path from Windows, without specifying a user/password, to write into /cloudfs on the ODA via Windows Explorer:
    \\ODA-SCAN\ODAPROD_CLOUDFS

    You could also map the share as drive in Windows using the net use command.

    Now we can back up directly from the Windows server to the ODA, using e.g. (a slightly fuller sketch follows after this list)
    backup database format '\\ODA-SCAN\ODAPROD_CLOUDFS\%U';

    In addition, we don't need to install PuTTY on all Windows servers to pscp, e.g., the VM templates for the application VMs we want to use on the virtualized platform.
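
As a hedged sketch only (the channel allocation and the extra archivelog/controlfile parts are my additions, not exactly what we ran during the migration), an RMAN run block writing to the share could look like:

    run {
      allocate channel ch1 device type disk format '\\ODA-SCAN\ODAPROD_CLOUDFS\%U';
      backup database plus archivelog;
      backup current controlfile format '\\ODA-SCAN\ODAPROD_CLOUDFS\ctl_%U';
    }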

Some remarks:
  • As this configuration doesn't require a username and password, you should start and stop the smb service explicitly - don't let it run all the time; start it when needed and stop it once you have put everything onto the ODA.
    It's a question of security, and you surely don't want someone to use the storage of the ODA to put some photos or videos on it and share them.
  • If you need the cloudfs share as a long-term solution, use the standard configuration with security, add your share to the smb.conf and map the drive on the MS Windows systems that need it.
  • I have not registered the smb service with the cluster infrastructure (as it is - in our case - only started when needed). If you want to keep the smb share(s), so that Samba is automatically started after ACFS/databases/..., you can configure that - the solution would be: crsctl add resource
    An example (for Samba) can be found in the Oracle documentation (this link is for version 12.1)

ODA X7-2 HA ILOM configuration using serial cable

Over the last few days we were setting up an Oracle Database Appliance (ODA) X7-2 HA with the 12.2.1.4.0 virtualized platform at a customer site.

One of the things that is a little bit strange is how to set up the ILOMs on the ODA nodes if you don't have DHCP.

Up to and including the ODA X6-2 HA, there were a couple of different possibilities to set up the ILOM IP addresses and the ILOM hostnames.
  • Use DHCP on the ILOM copper interface. Typically, the ILOM network is a totally separate network used only for managing servers, so there isn't any DHCP in this part of the network.
  • Use the VGA connector and a USB keyboard and run the ILOM configuration during BIOS bootup. The Oracle support note describing this approach is How to Setup the ILOM network for Pre-Deployment During BIOS Bootup (Doc ID 1393191.1), but unfortunately there isn't a VGA connector on the ODA X7-2 HA anymore (I think you could connect using a USB hub, but who has this kind of stuff in a data center??). I personally liked this approach with the older ODAs, as there is a display and a USB keyboard somewhere in every data center.
  • Third is using a serial cable (USB to RJ45 - also often called a Cisco console cable) - this is typically cheap and available in data centers and is therefore now my preferred way of setting up the ILOM, even though it used to be the approach I liked second most.
  • Another way is using IPMItool - I have never tried this, because within a minute or two of searching I never found software that I can install on my Windows laptop; I only found some links inside the documentation. If you want to try it, you can download it (it would be nice if you put the download link into a comment under this post - also if you already use it) and follow the MOS note ILOM network configuration via ODA console (Doc ID 1936081.1). It would also be nice to add your experience to your comment, as this may be helpful for others.
As my absolute favourite method is not available anymore, I started using my console cable to set up the ILOM on the ODAs - this had also worked very well every time we needed it.

I was really surprised when I wasn't able to connect to the ILOM.
OK, I already knew from the note ILOM configuration via Serial port (Doc ID 1395445.1) that COM1/COM9 is used, and I had to look at my Device Manager to find out which COM port is assigned to my console cable for use with PuTTY (for me it was COM7). But with 9600 baud, 8 data bits, 1 stop bit, parity none and flow control XON/XOFF, there was no chance to see anything except a green block.
Pressing Enter didn't change the screen to a logon screen. Re-powering the system at least showed us some junk characters. I looked through several ILOM documentations of Sun servers, and still everything seemed to be correct with my settings.

The solution was to set up the connection with 115200 baud as the connection speed. I first asked some guys who also set up ODAs from time to time, and most of them didn't know that either. But then one gave me the right hint. So it was working now: I was able to connect to the ILOM as root and made my changes using all of these nice set /SP/network statements (even if some notes/documentation may tell you that you can combine them, you can't - use a separate line for each setting). Afterwards I did my re-image to use the virtualized platform.
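
For reference, a hedged sketch of the kind of statements I mean, one per line (the IP addresses and the hostname are placeholders):

    set /SP/network pendingipdiscovery=static
    set /SP/network pendingipaddress=192.168.1.50
    set /SP/network pendingipnetmask=255.255.255.0
    set /SP/network pendingipgateway=192.168.1.1
    set /SP/network commitpending=true
    set /SP hostname=oda1-ilom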
 
Later on, I read Note 1395445.1 again and guess what - I found the hint that one should use 115200 if it is not working. But the hint is somewhere I, at least, wasn't looking at in detail.
As I already knew that COM1 would not work, I skipped to step 4 of the note. This is the step where, having found YOUR COM port, you try to connect to the ILOM. The screenshot there does not tell you anything about using 115200 - it shows 9600 - and the text does not give you any hint either. No, testing with 115200 baud is written in step 2 - the step where you are told to set up 9600 baud and to follow step 3 to find out the right COM port. Afterwards, I also found a note for the Sun Server X5-2 which says that 115200 is the right ILOM speed (Doc ID 2095378.1).

Lessons learned? I will start reading support notes from the beginning again, even if (I think) I know what's in steps 1-3 - especially if I have done things like this often before. It's a little bit like when they change the right of way or a speed limit on the route you drive to work every day...

Lost entries at /etc/oratab with Oracle 12.2 / 12c R2 / 18c

I was working on a customer database environment, setting up nearly 30 new databases with Oracle 12.1 and 12.2, Standard Edition and Enterprise Edition. For the datafiles we used ASM with 12.2 Grid Infrastructure for single instances (with Restart), in the newest version we could get (April 2018).

One of the things that really drove me mad was that some entries in oratab were gone after stopping and/or starting the database with srvctl or after rebooting the nodes.

At first I was thinking about a bug in ASM, because there was also one in 12.1.
But after a while I found a nice note at https://support.oracle.com which states that this is a new default behaviour and not a bug.

In detail: starting with the 12.2.0.1.171017 GI RU/PSU (or any later version), oratab entries are removed automatically if they have been "added by agent".

The note "Applying 12.2.0.1.171017 RU (patch 26737266) To 12.2 Cluster Removes Oratab Entries (Doc ID 2329359.1)" tells us, that "The agent deletes the oratab entries that have comment "# line added by Agent". For example, when the GI is recycled (including after a node reboot), then the agent will remove the oratab entries that have the comment "# line added by Agent"".

Unfortunately, it was not enough to just remove the comment "# line added by Agent", as written in the document - we needed to put our own comment, "# manually added", at the end of the lines.

Why is Oracle making this the new default behaviour? "With the introduction of the flex ASM (asm may not run on all nodes) and policy managed databases (no instances is tied to a server), the mapping of the asm and RAC database instances to a specific node is no longer guaranteed to be valid, so this is the reason that the oratab entries are being removed."

So if you run into this problem with GI version 12.2 or later (I think this will also happen with Oracle Grid Infrastructure 18c) and you want to keep the entries, just add your own comment at the end of every line you want to keep, as in the example below.
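
A hedged example of such a protected oratab line (the SID and the home path are made up):

    XFEE1:/u01/app/oracle/product/12.2.0.1/dbhome_1:N # manually added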

If you don't know this, it drives you mad; if you know it, life can be easy.

P.S. If you need to find the names of the databases, this blog entry could be helpful: https://fritshoogland.wordpress.com/tag/recreate-oratab/

New Achievement :-)

Nice certificate for reaching "Expert" level at the My Oracle Support Community.





Vorträge im Juni 2018 / Events in June 2018

Dear English readers - as the events are held in German, this entry will mainly be of interest to German-speaking readers. Thanks for your understanding.

In June there is again a series of talks worth seeing. As I am on stage as a speaker at both dates myself, these two are of course especially close to my heart:

First, the "Oracle" focus day on 12.06.2018 at Robotron in Wil. The agenda is very interesting, and you can/should register already today - before all seats are taken. Topics include, among others: Database 18c new features and (unknown or unused) features of the Standard Edition 2 and the Enterprise Edition database without additional packs, such as the Diagnostics and Tuning Pack.

The second date is the Special Day of the Swiss Oracle User Group (SOUG) two days later, on 14.06.2018 at the Trafo in Baden - also a must for everyone working with Oracle products in Switzerland.

I would be happy to see some new and, of course, also familiar faces at the events. By the way - in case you haven't heard yet, my employer Robotron Schweiz GmbH is now a SOUG sponsor!

Dynamic datapump export without expdp client using DBMS_DATAPUMP API and SQL*Plus (also for PDBs)

It has been a long time since I wrote my last blog entry, but I was busy with a bigger customer project where we needed to migrate roughly 30 databases to a new environment.

Nearly finished with the project, I got a script from our customer to schedule via cron.
The script should do a full export every Sunday and, on all other days, a schema export of all schemas that don't use the SYSTEM or SYSAUX tablespace as their default tablespace.

What I saw then was unbelievable; I nearly couldn't stop shaking my head.
The former consulting company had created a procedure that was stored in the SYS user. As if that wasn't bad enough, this procedure was scheduled from inside the database (using dbms_jobs) and its only job was - are you sitting down? - to dynamically create a Linux shell script, with a full export on Sunday and the schema exports on all other days.
The last step, and the next reason to shake my head, was that this dynamically created script was then scheduled in cron. I don't need to mention that the export wasn't made consistent using any FLASHBACK_ parameter.

What the hell...

First rule: never put anything into the SYS schema of the database, especially not when it is scheduled inside the database.
Second rule: don't make things more complicated than they are; use your brain, but keep the things that run simple and understandable (I deliberately don't say stupid).
Third rule: don't rely on a job cascade combining inside and outside database schedules. The worst thing here is that the dynamic shell script might not get written by the database, so if you add e.g. a schema, it will never be exported - and you may also never get a full export (or you get a full export all the time).

Well, these three and a lot more topics convinced me to streamline the export a little bit:
  1. Keep it simple and easy
  2. Make only one job
  3. Don't put any own code into the SYS schema
  4. Make the script dynamic
  5. Make the export consistent
  6. Add own logfile entries

So I created a new script; you can copy and paste it into your notepad/vi and save it (maybe you need to deal with wrong quotation marks after copy/paste). The script exports to the temp directory with the DB_UNIQUE_NAME as a subdirectory. Please create this directory before you start the export, or add a host mkdir -p /tmp/&lv_db line after the create or replace directory statement to create it dynamically as well (see the small sketch below).
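
That optional addition would sit right after the create directory line, roughly like this (assuming sqlplus runs on the database host):

    create or replace directory "DATA_PUMP_DIR" as '/tmp/&lv_db';
    host mkdir -p /tmp/&lv_db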

Afterwards, you can run the script by simply starting SQL*Plus as SYS/SYSTEM and providing the script name; e.g., if you save the script as JoergsDatapumpExport.sql, you can start it, after setting oraenv, by calling $ sqlplus / as sysdba @JoergsDatapumpExport.sql .

Hope this script is helpful for someone - if you connect to a PDB using sqlplus, for example, and have added the mkdir part, you can dynamically export from any PDB. There are also some enhancements I have done for my test environment that you could add as well: parallel export support depending on the number of cores you can use (a sketch of that follows after the script), or stopping the Data Pump export if there is no schema to export - before ugly stuff is written into the logfile.

Comments are welcome.




WHENEVER SQLERROR EXIT FAILURE;
set serveroutput on
set feedback off

-- pick up the DB_UNIQUE_NAME as SQL*Plus substitution variable lv_db
column db_unique_name new_value lv_db
select db_unique_name from v$database;

-- export target is /tmp/<DB_UNIQUE_NAME>; the OS directory must exist (or add a host mkdir here)
create or replace directory "DATA_PUMP_DIR" as '/tmp/&lv_db';

declare
  l_dp_handle    NUMBER;
  v_job_state    varchar2(4000);
  v_day          varchar2(100);
  v_host         varchar2(50);
  v_db           varchar2(20);
  v_schemaexport varchar2(1000);
  l_current_scn  number;

  -- all schemas that do not use SYSTEM/SYSAUX as default tablespace (PERFSTAT excluded)
  cursor c1 is
    select username
      from dba_users
     where default_tablespace not in ('SYSTEM','SYSAUX')
       and username not in ('PERFSTAT');

BEGIN
  select rtrim(to_char(sysdate,'DAY')) into v_day from dual;
  select host_name into v_host from v$instance;
  select db_unique_name, current_scn into v_db, l_current_scn from v$database;

  -- full export on Sunday, schema export on all other days
  if v_day in ('SUNDAY','SONNTAG') then
    l_dp_handle := DBMS_DATAPUMP.open(operation   => 'EXPORT',
                                      job_mode    => 'FULL',
                                      remote_link => NULL,
                                      version     => 'LATEST');
  else
    l_dp_handle := DBMS_DATAPUMP.open(operation   => 'EXPORT',
                                      job_mode    => 'SCHEMA',
                                      remote_link => NULL,
                                      version     => 'LATEST');
  end if;

  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => v_host||'_'||v_db||'_'||v_day||'.dmp',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE,
                         reusefile => 1);
  DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                         filename  => v_host||'_'||v_db||'_'||v_day||'.log',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE,
                         reusefile => 1);

  -- make the export consistent as of the current SCN
  DBMS_DATAPUMP.SET_PARAMETER(handle => l_dp_handle, name => 'FLASHBACK_SCN', value => l_current_scn);

  if v_day in ('SUNDAY','SONNTAG') then
    DBMS_DATAPUMP.log_entry(handle  => l_dp_handle,
                            message => 'FULL Export.');
  else
    begin
      DBMS_DATAPUMP.log_entry(handle  => l_dp_handle,
                              message => 'SCHEMA Export (without schemas with SYSTEM and SYSAUX default Tablespace).');
      -- build the IN list of schemas for the metadata filter
      v_schemaexport := 'IN (';
      for c1_rec in c1 loop
        begin
          if c1%rowcount = 1 then
            v_schemaexport := v_schemaexport||''''||c1_rec.username||'''';
          else
            v_schemaexport := v_schemaexport||','||''''||c1_rec.username||'''';
          end if;
        end;
      end loop;
      v_schemaexport := v_schemaexport||')';
      DBMS_DATAPUMP.METADATA_FILTER(l_dp_handle, 'SCHEMA_EXPR', v_schemaexport);
    end;
  end if;

  DBMS_DATAPUMP.log_entry(handle  => l_dp_handle,
                          message => 'Please wait until the export is completed.');

  DBMS_DATAPUMP.log_entry(handle        => l_dp_handle,
                          message       => 'Export Parameters: Flashback_SCN='||to_char(l_current_scn),
                          log_file_only => 0);

  DBMS_DATAPUMP.log_entry(handle        => l_dp_handle,
                          message       => 'Start time: '||to_char(sysdate,'DD.MM.YYYY HH24:MI:SS'),
                          log_file_only => 0);

  DBMS_DATAPUMP.START_JOB(l_dp_handle);
END;
/
exit;
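
As a hedged sketch only (not part of my original script): the parallel enhancement mentioned above could be done by setting a degree with DBMS_DATAPUMP.SET_PARALLEL and using the %U substitution in the dump file name so every worker gets its own file (keep in mind that Data Pump parallelism needs Enterprise Edition):

    -- hypothetical enhancement, e.g. 4 parallel workers;
    -- replaces the original .dmp add_file call, %U gives each worker its own file
    DBMS_DATAPUMP.add_file(handle    => l_dp_handle,
                           filename  => v_host||'_'||v_db||'_'||v_day||'_%U.dmp',
                           filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE,
                           reusefile => 1);
    DBMS_DATAPUMP.SET_PARALLEL(handle => l_dp_handle, degree => 4);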


How to change the timezone on ODA lite models

At a customer's ODA (the one we had deployed using a JSON file) we saw a problem right before going into production. The timezone was set wrongly due to a typo in the JSON file: CET is the Central European Timezone, while ECT is the Ecuador Timezone, and in addition to this we had also used UCT.
Redeploying is not funny if you already have everything you need installed on the ODAs (including KVMs), Data Guard works and the users have already finished the application tests.

So we needed to change the timezone without redeploying, and, as the system is located in Switzerland, specifying the timezone Europe/Zurich is even better than specifying CET (even if logically it is the same).
But a typo isn't the only reason to change the timezone, as Oracle mentions:
  • Mistaken value during deployment
  • New timezone values or definitions for business reasons
  • Change order to enforce (or disregard) daylight savings values (DST)
  • New usage of timezone dependent functionality including Grid Control, EM Agent or Dbconsole settings
  • Enforce business rules or auditing based on specific time / timezones
  • Reinstallation
At support.oracle.com there is a note, "How to change the Timezone in your ODA (Oracle Database Appliance) (Doc ID 1519975.1)", but unfortunately it is only for the ODA HA models.
If you try to do the first steps of the note, you will see that "onecommand.params" does not exist at "/opt/oracle/oak/onecmd", and you will also search unsuccessfully for the "GridInst.pl" command file, which should be located in the same directory.

We recently managed to change the timezone on an ODA lite model by following this approach, which is a mixture of plain Linux and Doc ID 1519975.1:

1.) Connect as root to the system

2.) Check the timezone of the system:

     [root@ODA1 ~]# date
     Tue Feb 20 10:05:49 UTC 2018

3.) Remove the symbolic link localtime from the /etc directory

    [root@ODA1 ~]# cd /etc
    [root@ODA1 etc]# rm localtime
    rm: remove symbolic link `localtime'? yes


4.) Create a new symbolic link "localtime" to the right timezone located under /usr/share/zoneinfo/*; in my example it is Europe/Zurich:

   [root@ODA1 etc]# ln -s /usr/share/zoneinfo/Europe/Zurich localtime


5.) Check if the timezone of the system is now correct:

   [root@ODA1 etc]# date
   Tue Feb 20 11:07:54 CET 2018

Now the node itself is running with the correct timezone and we can follow the approach of the Oracle note.

6.) The next step is to set the environment for all databases we have, using srvctl. For this you must connect as the user "oracle", set the environment (running oraenv) and change the timezone for the database:

   [oracle@ODA1 ~]$ . oraenv
   ORACLE_SID = [oracle] ? MYDB
   The Oracle base has been set to /u01/app/oracle 

  
   [oracle@ODA1 ~]$ srvctl setenv database -d MYDB_ODA1 -t TZ=Europe/Zurich


7.) Repeat srvctl setenv database for all databases you run on the ODA, using the corresponding environment (oraenv); a possible shortcut is sketched below.
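
If all databases happen to run from the same Oracle home, a small loop like this saves some typing (a hypothetical sketch; with multiple homes, set the environment per database instead):

   for db in $(srvctl config database); do
     srvctl setenv database -d $db -t TZ=Europe/Zurich
     srvctl getenv database -d $db
   done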


8.) Now you have to connect as the grid user to the ODA and modify a file. The folder the file is located in is "/u01/app/<yourversion>/grid/crs/install" and the file is named "s_crsconfig_<yourODAhost>_env.txt".
For our environment we did the following:

   [grid@ODA1 ~]$ vi /u01/app/12.1.0.2/grid/crs/install/s_crsconfig_oda1_env.txt

In vi, change the "TZ=<value>" line; in our environment that meant changing
   TZ=ECT
to
   TZ=Europe/Zurich

You need to write the file with force (:wq!) after you have changed the line.

9.) The next step is to restart the Grid Infrastructure, e.g. by rebooting the system.

10.) The last step is to change the environment for ASM as well.
So again connect as the grid user to the ODA and execute:

   [grid@ODA1 ~]$ srvctl setenv asm -t TZ=Europe/Zurich
   [grid@ODA1 ~]$ srvctl getenv asm
   asm:
   TZ=Europe/Zurich


That's it folks - you are done!

Edit March 2021:
- First:
On newer ODA versions you can also update the registry (internal database) to reflect the change.

If you run odacli describe-system, you will still get the old timezone unless you run:
# odacli update-registry -n system -f 

- Second: There is a bug where SYSDATE may be wrong after migrating from ODA release 18.x to 19.x. Please check, using vi /u01/app/12.1.0.2/grid/crs/install/s_crsconfig_oda1_env.txt, that the timezone isn't specified with any ' ' or " " in the file. If it is, change it back. Also check with srvctl getenv for all listeners, databases and ASM whether the right timezone is still set. If you set them manually, restart all components affected by the change.