
How to Extract and Load Data From an Oracle Database

Author

Mason Cooper

Updated on March 29, 2026

Power BI, Microsoft's business analytics service, lets us connect to hundreds of data sources and produce beautiful reports that can be consumed on the web and across mobile devices, delivering insights throughout the entire organization. In this post, I will walk you through getting data into Power BI from an Oracle database.

When you open Power BI Desktop, you will see the following window:


PowerBI welcome page

Click Get Data (located at the top left of the window) and the following window will appear. Since we want to get the data from an Oracle database, all we have to do is select the Oracle Database option and click the Connect button.


How to connect Power BI to Oracle Database

After clicking the Connect button, the following window will appear:


Connect to Oracle database

In the Server box, we enter exactly the following text:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XE)))
This connect descriptor is defined in the tnsnames.ora file, which is located at: C:\oraclexe\app\oracle\product\11.2.0\server\network\ADMIN
Note: if the entry in your tnsnames.ora differs, adjust the descriptor to match your own configuration.
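The descriptor is just a structured string, so it can also be assembled programmatically. A minimal Python sketch, using the local XE defaults from this walkthrough (host, port, and service name are the only assumptions):

```python
def make_tns_descriptor(host, port, service_name):
    """Build an Oracle TNS connect descriptor like the one in tnsnames.ora."""
    return (
        "(DESCRIPTION="
        f"(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port}))"
        f"(CONNECT_DATA=(SERVICE_NAME={service_name})))"
    )

# The values used in this post: a local Oracle XE instance.
dsn = make_tns_descriptor("localhost", 1521, "XE")
print(dsn)
# (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XE)))
```

The same string can be pasted into Power BI's Server box, or passed as the DSN to any Oracle client library.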


Once we have copied and pasted that text, we must make sure the Import option is selected, because we want to import the data directly from the Oracle database. When everything is set, we click the OK button.


Oracle Database XE

After that, the following window will appear. Here we enter our credentials, which correspond to the user we created in Oracle Database Express Edition 11g:


Oracle Database XE login

These are the values we entered when creating that user:
Database Username: EXAMPLE_USER
Application Express Username: TEST
Password: test
Confirm Password: test


Oracle db in PowerBI

So here, in our case, we enter the following credentials:
Username: EXAMPLE_USER
Password: test


Navigate Oracle database

Once you click the Connect button, you will arrive at the following window, where you can see the objects in your Oracle database:


If you expand your schema (in our case, EXAMPLE_USER), you will see the same tables we have in Oracle.
Tables in Power BI:


Oracle Databases Navigator


Load tables in PowerBI

If we select some of them, we will be able to load them into Power BI Desktop, as follows:


After a few minutes of loading, the selected tables will appear:


Tables loaded in PowerBI

Now, on the right side of the screen, we can see the loaded tables with all their data. In conclusion, this is how to connect Power BI to an Oracle database.

If you want to improve your Power BI skills, you can check out our course. Feel free to enroll and enjoy learning!

Stay tuned for more news on our blog, and subscribe to our newsletter if you want to receive new posts by email, get course discounts, and more. 🙂

I have been working at SolidQ as a Data Platform Specialist on BI projects since April 2017.

During my training I have always focused on data, taking courses on SSIS, SSAS, and SSRS, and on using Power BI Desktop, Management Studio, and Visual Studio, delving especially into Power BI Desktop. Nowadays, I work with customers using these tools, applying all the best practices I have learned.

Re: How to extract Oracle data into Excel file?

The "extract data from a table" part is easy; you could do that with VB/ADO or .NET/ODP.NET. Taking that data and appending it to a spreadsheet might be the hard part, and exactly how you'd do that is really more of a Microsoft question than an Oracle one.

If you want to be able to do this from the database itself and your database is on Windows, you could use .NET Stored Procedures if you can manipulate the spreadsheet in .NET code, or you could use Oracle's COM Automation Feature if you're handy with the COM object model for Excel.

Exactly how you'd do that via .NET, COM, or VB is the crux of the problem and is something you'd need to know before it turns into an Oracle question. But if you already know how to do that and just need a way to do it from Oracle, either of the above might help.

Devart Excel Add-in for Oracle allows you to connect Excel to Oracle databases, retrieve and load live Oracle data into Excel, and then modify this data and save the changes back to Oracle. Here is how you can connect Excel to Oracle and load Oracle data into Excel in a few simple steps.

To start linking Excel to Oracle, on the ribbon click the DEVART tab and then click the Get Data button. This displays the Import Data wizard, where you create the Excel Oracle connection and configure the query for getting data from Oracle into Excel:


1. Specify Connection Parameters

To connect Excel to an Oracle database, you need to enter the necessary connection parameters in the Connection Editor dialog box. The add-in supports two connection modes. The Direct connection mode connects Oracle to Excel without any additional software; the OCI connection mode requires the Oracle Client to be installed. The required connection parameters differ between the two modes.

Direct Connection Mode

The following parameters are used for connecting Excel to an Oracle database in the Direct connection mode:

  • Host – the DNS name or IP address of the Oracle server to connect to. It can also accept a TNS descriptor, or specify a secure protocol to use and, optionally, a port after a colon.
  • SID – the unique name for an Oracle database instance.
  • Port – the number of a port to communicate with listener on the server. The default value is 1521.
  • User Id – your Oracle user name.
  • Password – your Oracle password.
  • Database – the name of SQL database to connect to Excel.
  • Connect as – allows opening a session with administrative privileges.

Direct mode also supports secure SSH and SSL connections. To enable SSH or SSL, add the corresponding prefix to the Host parameter – ssh:// for the SSH protocol and tcps:// for SSL – and then specify the connection string parameters for the corresponding protocol in the Advanced parameters.
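The prefix convention described above can be captured in a small helper. A Python sketch (the ssh:// and tcps:// prefixes come from the text; the host name and function are illustrative):

```python
def host_with_protocol(host, protocol=None):
    """Prefix the Host parameter for Direct-mode secure connections.

    protocol: None (plain TCP), "ssh" (SSH tunnel), or "ssl" (TLS),
    following the add-in's prefix convention.
    """
    prefixes = {None: "", "ssh": "ssh://", "ssl": "tcps://"}
    if protocol not in prefixes:
        raise ValueError(f"unsupported protocol: {protocol!r}")
    return prefixes[protocol] + host

print(host_with_protocol("db.example.com", "ssh"))  # ssh://db.example.com
print(host_with_protocol("db.example.com", "ssl"))  # tcps://db.example.com
```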


OCI Connection Mode

To use the OCI connection mode to connect to an Oracle database, you must have the Oracle Client software installed on your PC. Clear the Direct check box to work with the Oracle Client.

In this mode the SID and Port settings are not used; you specify the Oracle Home to use instead. In addition, in Client mode the Host parameter must specify the TNS alias of the Oracle database to connect to, instead of the server's IP address or DNS name. Specify the Oracle Client you want to use in the Home connection option.

Advanced Connection Parameters

If you need to configure your Excel Oracle connector in more detail, you can optionally click the Advanced button and configure advanced connection parameters. There you can configure secure SSH and SSL connections for the Direct mode, trimming of fixed char data types, Oracle proxy authentication (OCI mode only), Unicode settings, etc.

To check whether you have connected Excel to Oracle correctly, click the Test Connection button.

2. Select whether to Store Connection in Excel Workbook

You may optionally change how the connection and query data are stored in the Excel workbook and in Excel settings:

  • Allow saving add-in specific data in Excel worksheet – clear this check box if you don't want to save any add-in specific data (connections, queries, etc.) in the Excel worksheet. In that case, if you want to reload data from Oracle to Excel or save modified data back to Oracle, you will need to re-enter both the connection settings and the query.
  • Allow saving connection string in Excel worksheet – clear this check box if you don't want your Oracle connection parameters stored in the workbook. You will then need to re-enter your connection settings each time you reload Oracle data or modify and save it back to Oracle. However, you can then share the workbook, and nobody will be able to extract any connection details from it.
  • Allow saving password – it is recommended to clear this check box. If you don't, all the connection settings, including your Oracle password, are stored in the Excel workbook, and anyone with the Excel Add-in for Oracle and the workbook will be able to link Excel to Oracle, get data from it, and modify it. On the other hand, you won't need to re-enter anything when reloading data from Oracle or saving it back.
  • Allow reuse connection in Excel – select this check box if you want to save this connection on your computer and reuse it in other Excel workbooks. It does not affect saving connection parameters in the workbook itself. You need to specify a connection name, and afterwards you can simply select this connection from the list.

3. Configure Query to Get Data

To import data from Oracle to Excel, you may either use the Visual Query Builder to configure a query visually, or switch to the SQL Query tab and type the SQL query. To configure a query visually, do the following:

In the Object list select the Oracle table to load its data to Excel.

In the tree below clear check boxes for the columns you don’t want to import data from.

Optionally expand the relation node and select check boxes for the columns from the tables referenced by the current table’s foreign keys to add them to the query.

In the box on the right you may optionally configure filter conditions and ordering of the imported data, and specify the maximum number of rows to load from Oracle into Excel. For more information on configuring the query, refer to our documentation, installed with the Excel Add-ins.

After specifying the query, you may optionally click Next and preview some of the first returned rows. Or click Finish and start data loading.
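The query the visual builder produces can equally be written by hand on the SQL Query tab. As an illustration, a Python sketch that assembles an Oracle-style SELECT with the options described above (column list, filter, ordering, and a row limit); the table and column names are invented:

```python
def build_select(table, columns, where=None, order_by=None, max_rows=None):
    """Assemble an Oracle SELECT mirroring the Import Data wizard's options."""
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    conditions = []
    if where:
        conditions.append(where)
    if max_rows is not None:
        # Classic Oracle row limiting. NOTE: ROWNUM is evaluated before
        # ORDER BY, so for an exact top-N you'd order in a subquery;
        # Oracle 12c+ also supports FETCH FIRST n ROWS ONLY.
        conditions.append(f"ROWNUM <= {max_rows}")
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    if order_by:
        sql += f" ORDER BY {order_by}"
    return sql

print(build_select("ORDERS", ["ORDER_ID", "AMOUNT"],
                   where="AMOUNT > 100", order_by="ORDER_ID", max_rows=50))
```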


After the data is loaded from Oracle into the Excel spreadsheet, you can work with it like a usual Excel worksheet. You can instantly refresh the data from Oracle by clicking Refresh on the Devart tab of the ribbon, and thus always have fresh live Oracle data in your workbook.

If you want to edit Oracle data in Excel and save the changes back to the Oracle database, you first need to click Edit Mode on the Devart tab of the ribbon. Otherwise, the changes you make cannot be saved to Oracle.

After you start Edit mode, you can edit the data as you usually do in Excel – delete rows, modify cell values. Columns that cannot be edited in Oracle are shown in italic, and you cannot edit values in these columns. To add a new row, enter the required values in the last row of the table, which is highlighted in green.


To apply the changes to the actual data in the database, click the Commit button, or click Rollback to roll back all the changes. Please note that the changes are not saved to the database until you click Commit, even if you save the workbook.
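The Commit/Rollback behavior is ordinary database transaction semantics. A runnable illustration using Python's standard-library sqlite3 as a stand-in (this is not the add-in's code, just the underlying idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'original')")
conn.commit()

# Edit Mode analogue: change a value, but don't commit yet.
conn.execute("UPDATE t SET val = 'edited' WHERE id = 1")
conn.rollback()  # Rollback discards the pending change...
print(conn.execute("SELECT val FROM t").fetchone()[0])  # original

conn.execute("UPDATE t SET val = 'edited' WHERE id = 1")
conn.commit()    # ...while Commit makes it permanent.
print(conn.execute("SELECT val FROM t").fetchone()[0])  # edited
```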

Once you have created a connection to an Oracle database, you can select data and load it into a Qlik Sense app or a QlikView document. In Qlik Sense, you load data through the Add data dialog or the Data load editor. In QlikView, you load data through the Edit Script dialog.

The Oracle Connector supports Direct Discovery. The SELECT statement can be edited in the Data load editor and the Edit Script dialog to create a DIRECT QUERY statement.

Qlik Sense: Oracle database properties

  • Owner – Shows the owners of tables in the database.
  • Tables – Shows the tables associated with the selected owner. Selecting a table causes its fields to be displayed in the Data preview tab.
  • Data preview – Shows a preview of the fields and data of the selected table.
  • Metadata – Shows a table of the fields and whether they are primary keys. Primary-key fields are also labeled with a key icon beside the field name.
  • Fields – Lists the fields in each selected table. If the table name check box is selected, all the fields in the table are automatically selected. If you click only the table name, the fields are displayed but not selected; they can then be selected individually. You can also change a field name by clicking on the existing field name; the new name is then used as an alias for the field's name in the database.
  • Filter data – Shows a field where you can enter filter criteria.
  • Filter fields – Displays a field where you can filter on field names.
  • Hide script / Preview script – Shows or hides the load script that is automatically generated when table and field selections are made.
  • Include LOAD statement – Adds a LOAD statement before the SELECT statement.
  • Associations – Opens the Associations view of the Data manager, which lets you add more data sources, fix any errors in your data, and create table associations. This option is available only when you use Add data.
  • Insert script – Inserts the script displayed in the Data preview panel into the script editor. This option is available only when you use the Data load editor.

QlikView: Oracle database properties

  • Owner – Shows the owners of tables in the database.
  • Tables – Shows the tables associated with the selected owner. Selecting a table causes its fields to be displayed in the Preview tab.
  • Fields – Shows the fields associated with the selected table. Hold down Ctrl to select multiple fields. Use the drop-down menu to change the order of the fields.
  • Preview – Shows the data preview for the selected fields. Use the Maximum rows drop-down menu to limit the number of lines included in the preview.
  • Preceding load – Adds a LOAD statement before the SELECT statement.

Discussion Board for collaboration related to QlikView App Development.

Extract Data From Oracle Database and append it to.

I am attempting to read a table from Oracle and place the entire table into an existing QlikView table so that I can display the newly added data for a customer. The end result is that the customer can run a new report and see the existing data, including the new data recently added to Oracle. I have an OLEDB connection to get into Oracle, but I need a little assistance getting the data from Oracle into an existing QlikView application and then working with the data once it is in QlikView.


You should be able to extract it just as you would from any Oracle database: make a connection with your OLEDB driver and then use the Select wizard to create your query.

This will then automagically pull the table into QlikView, and you can manipulate it at will.


I currently have my connection in the Edit Module section of my application. I was trying to keep all of my logic in line because of the numerous other processes I have going on. How can I accomplish this in the Edit Module section?


Using the macro editor (module section) complicates matters as you must then use VBScript (or JScript).

QlikView is built to use the regular script editor for all connections and has built-in wizards for this.

If you must use the macro editor you will have to manually create VBScript connections (most likely using ADO and windows COM objects).

Have a look here for a rundown of this but I STRONGLY suggest you use the script editor instead of the macro editor.


Thanks for the timely response and the valuable information. I appreciate it.


I have another slight problem. While the user/customer is in QlikView, they can add new data through an input box for a particular table. Currently the application is structured so that when they query for a report, they can choose another button that adds new data for that table. Right now I am sending that new data out to an Oracle table. Now I need to bring that new data back into my QlikView application and append it to a table created in the initial load script. The customer needs to be able to run another report, say an hour later, and the new data written out to Oracle earlier needs to show up in that report. To sum it up: how do I take data out of an Oracle table, append it to an existing table in QlikView, and do this several times a day?
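Appending only the newly added Oracle rows to an already-loaded table is a classic incremental-load pattern: remember the highest key you have loaded and fetch only rows above it on each refresh. A Python sketch with stdlib sqlite3 standing in for Oracle (table and column names are invented for illustration):

```python
import sqlite3

src = sqlite3.connect(":memory:")          # stands in for the Oracle table
src.execute("CREATE TABLE customer_data (id INTEGER PRIMARY KEY, note TEXT)")
src.executemany("INSERT INTO customer_data VALUES (?, ?)",
                [(1, "initial"), (2, "initial")])

loaded = [(1, "initial"), (2, "initial")]  # rows already in the report/app
last_id = max(row[0] for row in loaded)

# New data arrives in Oracle between report runs...
src.execute("INSERT INTO customer_data VALUES (3, 'added later')")

# ...so each refresh appends only rows with id > last_id.
new_rows = src.execute(
    "SELECT id, note FROM customer_data WHERE id > ? ORDER BY id",
    (last_id,)).fetchall()
loaded.extend(new_rows)
print(loaded)  # [(1, 'initial'), (2, 'initial'), (3, 'added later')]
```

In QlikView the same idea is usually expressed in the load script with a WHERE clause on a key or timestamp, concatenating the result onto the existing table.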

In this post, I'll explain how to download, install, and set up an Oracle Database on your own computer.

Requirements to Install and Set Up Oracle Database

To be able to run the Oracle database on your computer, you’ll need:

  • Internet access to download the required files (or the files downloaded to be used offline)
  • Windows or Linux operating system. At the moment, there is no way to install Oracle on a Mac, unless you use a virtual machine such as Parallels.

Step 1 – Download Oracle Database Express Edition

The steps to download Oracle Database 11g Express Edition are:

  1. Visit the Oracle website at oracle.com.
  2. Go to the Downloads menu at the top.
  3. Select Oracle Database 11g Express Edition.
  4. Read and accept the license agreement
  5. Click on the link to the relevant version (Windows or Linux).
  6. Enter your Oracle account details (or sign up for one if you don’t have one already)
  7. Select the save location and press Save.

You can view the video here:

Step 2 – Install Oracle Database Express Edition

To install Oracle Database Express Edition, follow these steps:

  • Extract the file that was downloaded
  • Run the “setup.exe” file
  • Click Next on the Welcome screen
  • Accept the terms and conditions
  • Select your installation directory
  • Enter a password to use for both the SYS and SYSTEM database accounts. You will need this for logging in to the database later.
  • Click Install after reading the summary.

You can view the video here:

Step 3 – Download Java JDK

The steps to download the Java JDK are:

  1. Visit the Oracle website at oracle.com.
  2. Hover over the top menu and click on Downloads
  3. Click on Java for Developers
  4. Click on the Java Platform JDK. The JDK download page is shown.
  5. Read and accept the license agreement.
  6. Scroll down to find the installation files for the JDK for various operating systems and file types.
  7. Choose your preferred file type and download.
  8. Save the file in your preferred destination.

You should now be able to install the JDK on your computer.

PS: I’m aware that the J in JDK stands for Java, and it’s a bit strange calling it the Java JDK as that actually means “Java Java Development Kit”, but it’s a bit easier for some to recognise if it’s mentioned as Java JDK.

You can view the video here:

Step 4 – Install Java JDK

The steps for how to install Java JDK are:

  1. Run the file downloaded in the previous step (e.g. “jdk-7u25-windows-x64.exe”).
  2. Click Run if a security warning appears.
  3. Click Next on the Welcome screen
  4. Click Next on the Features screen (the default features are OK). The installation directory can remain at the default value.
  5. Wait for the JDK to be installed (approx 30-40 seconds).
  6. Accept the default installation for the Java Runtime Environment (JRE) and click Next.
  7. Wait for the JRE to be installed.
  8. Click Close.

Java JDK should now be installed on your computer.

You can view the video here:

Step 5 – Download Oracle SQL Developer

To download Oracle SQL Developer, which lets you run queries on the database, follow these steps:

  1. Go to oracle.com
  2. Select the Downloads tab, then under Developer Tools, click on SQL Developer.
  3. Accept the license agreement.
  4. Select a download link for your operating system.
  5. Save the file to a location on your computer
  6. Browse to the saved location
  7. Extract the ZIP file using a compression program (WinZIP, WinRAR, 7ZIP, etc)
  8. Close the ZIP file.
  9. Open the extracted folder.

The Oracle SQL Developer application can be run by opening sqldeveloper.exe.

You can view the video here:

Step 6 – Set Up Oracle SQL Developer

To set up Oracle SQL Developer for use, follow these steps:

  • Open the “sqldeveloper.exe” file
  • Click on Browse for the java.exe pathname
  • Browse to the location that the Java JDK was installed to (which may be C:\Program Files\Java\jdk1.7.0_25\bin)
  • Open Java.exe
  • Click OK
  • Click Yes if a warning for a certified version appears. I haven’t noticed any issues with this.
  • Click Yes if another message for version incompatibility appears.
  • Select the file types to associate with SQL Developer. I’ve selected all of them.

You can view the video here:

Step 7 – Create Connection in Oracle SQL Developer

To create a connection to your database from Oracle SQL Developer so you can run queries, follow these steps:

  • Click on the green + icon in the top left
  • Enter whatever you like as the connection name. I've used "Local". This is the value displayed for this connection within the application.
  • For username, use SYSTEM
  • For password, enter the same password as you did during the installation
  • Leave all of the other fields the same
  • Click on Test. The status should show “Status: Success”, which means the details are correct
  • Click Connect.

The connection will be created and connected to.

SQL Developer allows you to import and export connections, so you can ask a coworker to export theirs if you're working on the same data.

You can view the video here:

Well I hope this has given you some guidance on how to download, install, and set up an Oracle database on your computer. If you have any questions, let me know!

There are some other things you may want to do now that you have SQL Developer set up.

4 Comments

Congratulations on the good explanation of the Oracle installation; it has been a headache for lots of people. Glad you have opened some people's eyes!!

I am trying to do the same, but it is not working for a dynamic parameter.
Can you please help me?

In the Connection Properties screen, the parameter button is disabled. I am not sure how to enable it.

Mail Id : [email protected]

Can you please help with this?


Our enterprise plans to move our on-premises databases to the cloud. What’s the best method to migrate data from Oracle Database into an AWS EC2 instance?


Because few enterprises perform public cloud deployments from scratch, chances are high they have Oracle Database running on premises. Moving those on-premises databases to the cloud has its benefits, but an Oracle Database migration to AWS depends on a variety of factors.

Before undergoing an Oracle Database migration, IT teams must consider the size of the database, available network bandwidth between the local data center and AWS and which software edition is running in the data center. Business needs, such as the amount of time available to perform the move, are also important factors.

An Oracle Database migration can be accomplished in one step, but this requires a complete shutdown of the local database in order to extract and migrate the data to the new database in AWS. The process can take anywhere from one to three days, making this the most disruptive migration strategy. Single-step migrations are generally preferred by small businesses with limited database sizes that can tolerate prolonged downtime during the migration.

Two-step migration strategies are common. The first step produces a point-in-time copy of the existing database, which can be moved to AWS without imposing any downtime on the local database. The local database continues to run during this process, so the actual migration can take as long as necessary — there is almost no tangible disruption.

After the initial Oracle Database migration, a second step will capture, migrate and synchronize any incremental changes to the database. Once completed, the local Oracle Database will need to be shut down while the final changes are captured and migrated. This incremental step is considerably less involved than the initial synchronization, so the downtime is shorter. Once the final synchronization is complete, the AWS deployment takes over and the local database is decommissioned.

A third option promises zero downtime. This typically starts with an initial synchronization and then invokes a form of continuous data replication (CDR) to perform complete synchronization of the local and AWS database versions. A variety of tools, including Oracle’s GoldenGate or third-party tools like Dbvisit Replicate and Attunity Replicate, can handle CDR. A business can make the switch to AWS once replication has synchronized the local and AWS databases. The CDR tool will continue to keep the instances synchronized. This option is typically reserved for the largest or most active Oracle database users who cannot tolerate any downtime. However, there is an added cost to use CDR, and continuous replication can potentially affect database or network performance.


To access your data stored on an Oracle database, you will need to know the server and database name that you want to connect to, and you must have access credentials. Once you have created a connection to an Oracle database, you can select data from the available tables and then load that data into your app or document.

In Qlik Sense, you connect to an Oracle database through the Add data dialog or the Data load editor.

In QlikView, you connect to an Oracle database through the Edit Script dialog.

Setting up the database properties

The Oracle Connector requires additional configuration for the TNS service. The following two TNS files must be modified for the Oracle environment. Sample versions of these files are available in the sample directory at the end of the path shown below for each of the Qlik Sense and QlikView locations, or you can get the files from your Oracle configuration.

  • tnsnames.ora – sets the host addresses and ports for the TNS service.
  • sqlnet.ora – sets the location of the Oracle wallet.
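As an illustration, a minimal tnsnames.ora entry consistent with the XE connect descriptor used earlier in this post (the alias name XE is arbitrary; adjust host, port, and service name to your environment):

```
XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = XE)
    )
  )
```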

These files must be placed in one of the following locations for Qlik Sense and QlikView .

Qlik Sense Desktop:

%USERPROFILE%\AppData\Local\Programs\Common Files\Qlik\Custom Data\QvOdbcConnectorPackage\oracle\lib\network\admin

Qlik Sense Enterprise:

C:\Program Files\Common Files\Qlik\Custom Data\QvOdbcConnectorPackage\oracle\lib\network\admin

QlikView:

C:\Program Files\Common Files\QlikTech\Custom Data\QvODBCConnectorPackage\oracle\lib\network\admin

Database property – Description – Required
  • Host name – Host name identifying the location of the Oracle database – yes, unless using TNS Service Name
  • Port – Server port for the Oracle database – yes, unless using TNS Service Name
  • Service name – The alias name for the TNS service – yes, unless using TNS Service Name
  • Use TNS Service name – Use the TNS name defined in the tnsnames.ora file instead of the TNS alias name – no
  • TNS Name – The TNS name defined in the tnsnames.ora file – no

Authenticating the driver

Qlik Sense: Oracle authentication properties

  • Username – User name for the Oracle connection
  • Password – Password for the Oracle connection
  • Name – Name of the Oracle connection. The default name is used if you do not enter a name.

QlikView: Oracle authentication properties

  • Username – User name for the Oracle connection
  • Password – Password for the Oracle connection

Using Advanced options

  • Name – Name of an additional property.
  • Value – Value of an additional property.

You can add more than one additional property. See the Oracle documentation for a complete list of options that can be set in the Advanced settings fields.

Last updated on DECEMBER 16, 2019

Applies to:

Oracle Database – Enterprise Edition – Version 9.2.0.1 and later
Oracle Rdb Server on OpenVMS – Version 6.1 and later
Oracle Database Cloud Schema Service – Version N/A and later
Oracle Database Exadata Cloud Machine – Version N/A and later
Oracle Cloud Infrastructure – Database Service – Version N/A and later
HP OpenVMS VAX
HP OpenVMS Alpha

Purpose

Load data stored in Oracle RDBMS database(s) into an Oracle Rdb database using SQL*Plus and RMU/LOAD

Scope

Users transferring data from Oracle Server to Oracle Rdb

Details


By: Tim Smith | Updated: 2015-02-03 | Comments (5) | Related: More > PowerShell
Problem

We'd like to minimize the use of third-party tools to extract data from APIs and load it into SQL Server, and we wanted to know if we could implement solutions with PowerShell without too much overhead.

Solution

Yes, we can extract API data using PowerShell similar to how we can extract the same data in C#. Generally, we will get the data in XML or JSON (in some cases, we’ll directly parse HTML) and add the data into SQL Server. We can also bypass some tools that may add additional overhead (or loading) to get these data. The below is a guide to extracting data from an API, in this case using FRED’s West Texas Intermediate and provides an overall guideline of how to do so with other APIs.

Always Read the Documentation

Always read the API documentation because it will tell you what data are provided, how to access the data, what format it will come in, and it will sometimes provide an example. In the case of FRED, we can read its documentation to get a feel for what we'll be extracting, specifically how the data and topics are structured. If the API is not well known, this documentation will also give us ideas about how to validate the data before we add it; we don't want to add just anything into our database. Finally, many APIs, like FRED, require an API key, and developers often need to register for a key in order to proceed. This example does not provide a key, so if you want to emulate the example, you can register at FRED to get your free API key.

Converting Formats

After reading the documentation, we know that we want the West Texas Intermediate series in JSON format. In order to read JSON into SQL Server, we will need to convert the JSON syntax into T-SQL syntax (some developers may convert the JSON objects into a data table and perform a SQL bulk copy); in this case, I use T-SQL for easy debugging, but other methods are available. Also, keep in mind that if you're using a document-oriented database, like MongoDB, you can perform a direct insert and bypass converting data. Below is a simple example of a JSON document that we convert from and store in the object $converted; note how the fields become the object's properties:
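As an illustration of the idea (a Python sketch rather than the article's PowerShell, with made-up sample values), converting a JSON document turns its fields into the object's properties/keys:

```python
import json

# A small JSON document of the shape the article describes (hypothetical values):
document = '{"date": "2014-12-24", "value": "55.84"}'

# json.loads plays the role of PowerShell's ConvertFrom-Json here:
# each JSON field becomes a key on the converted object.
converted = json.loads(document)

print(converted["date"])   # 2014-12-24
print(converted["value"])  # 55.84
```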

Keep in mind that when we pull data from an API, we must convert the data to the appropriate format, so if it’s XML, JSON, or HTML, we must convert these, unless we intend to store the data directly in the database (I would add a data validation method before this in that case).

Data Recon

We now know what we want to get and how to convert the data; now we need to build our output. First, we want the dates and values of West Texas Intermediate, so we’ll build a table with two columns:
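A minimal stand-in for that two-column table (sqlite3 used here in place of SQL Server; the table and column names are invented for illustration):

```python
import sqlite3

# One column for the observation date, one for the WTI value.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE WTIPrices (
        ObservationDate TEXT,
        ObservationValue REAL
    )
""")
conn.commit()
```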

Also, we need a quick PowerShell function that will execute T-SQL and the below script does that (we provide the server and database name and the T-SQL text to execute):

Now, we need to extract the JSON string. Microsoft provides us with a useful class, the WebClient class, for working with data. It comes with a method called DownloadString, and this method can store a string in an object. Like creating new connections and commands, we'll create a new object that will allow us to use this class:
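A rough Python analogue of this download-the-string pattern (the endpoint shape follows FRED's documented observations API; the series id and key are placeholder assumptions, so the actual request is left commented out):

```python
from urllib.request import urlopen  # used by the commented-out call below

api_key = "YOUR_API_KEY"    # you must register with FRED to get a real key
series_id = "DCOILWTICO"    # assumed FRED series id for West Texas Intermediate
webstring = ("https://api.stlouisfed.org/fred/series/observations"
             "?series_id={}&api_key={}&file_type=json".format(series_id, api_key))

# With a valid key, this returns the large JSON document described in the text:
# result = urlopen(webstring).read().decode("utf-8")
```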

The $webget becomes our object on which we can call the method, DownloadString, to get the data from an API call (in this case, the API call string is $webstring; note that the API key, stored in $apikey, is not provided because you'll need to register with FRED to get one). Once we do this and Write-Host $result, we'll see a large JSON document that we need to convert:

The above converts the JSON and stores it back in the $result object; also, I've set up an empty array which will eventually store the insert statements. Once we convert $result from JSON, we'll see (when we print $result) that the data we're seeking is stored in the sub-document "observations"; this means that in each value of $result.observations we'll find the date and value fields that we want to store in our database. That means that we need each date and each value of $result.observations, so we'll perform our loop:

This looks good, but we have a problem: we haven’t validated our data yet, and if we look at our data, we’ll notice that on Christmas day of 2014 the value is “.” because West Texas Intermediate wasn’t reported. We want to take the time to validate our data here; while FRED is trusted and used very frequently, I still like to think about data validation, such as max lengths, invalid values, etc. For this article, I’ll only validate the invalid values:
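A sketch of that loop-plus-validation step in Python (the sample observations are invented, mirroring the "." unreported value the text mentions):

```python
import json

# Hypothetical miniature of the converted $result.observations sub-document:
result = json.loads("""
{"observations": [
    {"date": "2014-12-24", "value": "55.84"},
    {"date": "2014-12-25", "value": "."},
    {"date": "2014-12-26", "value": "54.73"}
]}
""")

inserts = []
for obs in result["observations"]:
    # Skip unreported values such as the "." on Christmas day 2014:
    if obs["value"] == ".":
        continue
    inserts.append("INSERT INTO WTIPrices VALUES ('{}', {})"
                   .format(obs["date"], obs["value"]))

print(inserts)  # two INSERT statements; the "." row is filtered out
```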

Now, outside the loop, we add the data to our database:

And review the results.


As we can see, extracting API data in PowerShell does not require too much, and this will look similar even if we were to obtain other JSON documents. We will find that most of the changes come when we choose whether we want (1) all the fields in a document, (2) some of the fields, (3) fields in one or more subdocuments, or (4) a few other possible combinations. In those cases, it's a matter of understanding the JSON (or XML/HTML) structure and getting what's needed; the way that we access and add the data remains the same. Finally, remember that developers can use data tables if they prefer and convert accordingly; different environments have different limits, so building something that matches our environment will reduce the probability of performance issues.

Next Steps
  • Study and review an API that you’d like to extract data from; keep in mind that some APIs may charge, so for testing, pick a free one (when it’s operating, Bitstamp is a good starter).
  • Test converting a JSON string and obtaining the fields you want.

Oracle Database Tips by Donald Burleson


Question: I need to offload a huge Oracle database onto flat files and export is too slow. I also find the import utility (imp and impdp) too slow. What are other options for fast data unloading and reloading from Oracle?

Answer: There are many tools, many vendors, and many techniques. I recommend the book “Oracle Utilities” for tips on improving Oracle unload/reload performance and my book “Oracle Tuning: The Definitive Reference” for specific tuning tips for high I/O operations. Also see:

  • Data Pump – Here are my tips for speeding up Oracle expdp (Data Pump). You can also tune the import utility (impdp) for faster performance.
  • Solid-state disk – SSD is hundreds of times faster than disk, and it’s perfect for super-fast unload/reload migrations.
  • SQL Loader – You can make SQL*Loader run very fast with tuning tips.
  • Database to database – Here is a clever technique using Linux with direct export/import, with no intermediate flat files.
  • CTAS – You can use parallelized “create table as select” over a high-speed database link to quickly move tables between Oracle databases.
  • Fact & CoSort – Fast Extract (FACT) for Oracle unloads large tables in parallel to flat files. FACT also writes the file layout metadata that CoSort can use for reload (reorg, ETL) pre-sorts, plus join and aggregate transforms, report generation, field-level security, etc. For more information about these tools click here.
  • Vendor tools – Vendors such as BMC and Wisdomforce offer tools for super-fast table unloading from Oracle. Also, unload/reloads happen super-fast on SSD hardware.
If you like Oracle tuning, see the book “Oracle Tuning: The Definitive Reference”, with 950 pages of tuning tips and scripts.


by Rahul Bhattacharya

Sunday, October 23, 2016


Exporting and Importing table data from Oracle database to Hive and vice-versa is one of the most common activities in the world of Hadoop. It is essential to get sorted out on a few basics for seamless first time integration so as to avoid various parsing and loading errors.

We will be doing the below activities sequentially so as to cover all the integration points between Oracle database, Sqoop, HDFS and Hive.

Step 1: Extract data from a source Oracle database table to Hadoop file system using Sqoop
Step 2: Load the above Sqoop extracted data into a Hive table
Step 3: Use Hive query to generate a file extract in the Hadoop file system
Step 4: Load the generated file in Step 3 to a new target Oracle database table

Step 1: Sqoop import data from Oracle database to Hive table

Our first task is to identify our source Oracle database table, and then use Sqoop to fetch the data from this table into HDFS.

sqoop import --connect "jdbc:oracle:thin:@<host>:<port>:<SID>" --username "<username>" --password "<password>" --table "<schema>.<table>" --columns "<col1>,<col2>" -m 1 --target-dir "<target HDFS path>" --verbose

It is interesting to observe that we need to identify a primary key for the source Oracle database table; otherwise we will get the error “Error during import: No primary key could be found for table”. If we want to skip assigning a key, we can include the parameter -m 1.

Step 2: Load the above Sqoop extracted data to a Hive table

Assuming we already have a table created in Hive, we will load the file created in Step 1 into the Hive table using the below syntax.

LOAD DATA INPATH '<HDFS path>' INTO TABLE <hive table name>;

Step 3: Export a file using Hive query to be consumed by Sqoop

Now that we have the data in our Hive table, we will use the below command to create a file from a custom Hive query in the chosen HDFS path. The delimiter (a comma here) can be changed according to our requirement, but it must then be changed to match in Step 4 as well.

INSERT OVERWRITE DIRECTORY '<HDFS path>/<directory>'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
SELECT * FROM <hive table name> WHERE <condition>;

Step 4: Load data from Hive table exported file to Oracle database table

The below command will use the above Hive exported file (from the same HDFS path) to load our target Oracle database table.

sqoop export --connect "jdbc:oracle:thin:@<host>:<port>:<SID>" --username "<username>" --password "<password>" --table "<schema>.<table>" --input-fields-terminated-by ',' --input-lines-terminated-by '\n' --export-dir "<HDFS path>/<directory>" --input-null-string "\\N" --input-null-non-string "\\N" --verbose

The field/line terminator and null-string options above help Sqoop read the records while exporting to the target database table; without them we might sometimes encounter the error “Can’t parse input data”.

Thus we have successfully loaded a table from an Oracle database to Hive and back again from Hive to Oracle, using Sqoop. We can query our Oracle and Hive databases to check whether the data loaded correctly. How do you prefer to load your data between these two systems?


By using a cursor we can take the data from the IMS database and store it in a flat file; then, using an application program, we read the records from the flat file and insert them into Oracle.


We can read data from IMS DB through an application program, then transfer that data as a flat file via SFTP and load it into Oracle.



September 1, 2008 at 7:06 am

I wondered if I could get some advice from people. I need to extract data from an Oracle 8i database which is sat on a Unix server, and load it onto a new database on SQL Server 2005. I am thinking my best option is to create a linked server, then create a SSIS package to import data into the new database. Problem is I have no idea how to create a linked server using a Unix platform. I have linked servers on SQL 2000 that use databases on Windows platforms but none on SQL 2005 using Unix. Does anyone know how I go about this?

Secondly, if anyone can tell me a different and better way of achieving my goal, I’d like to hear about that too. The only other option I am aware of is to go to the Oracle side and extract the data using sqlplus into flat files. I think that will take a long time to format etc.

So any help will be much appreciated!

September 1, 2008 at 7:20 am

Forgot to say, I know there are a lot of posts around this topic in this forum, and I don’t wish for people to repeat everything, but I couldn’t find anything specific to Unix, so felt the need to start a new post. Hope no-one minds!

September 1, 2008 at 5:11 pm

Same topic is discussed here

Regarding Unix, there is no difference in procedure whether the Oracle instance is on Unix or Windows. You are dealing with the Oracle database only, not the OS.


September 2, 2008 at 7:40 am

I try to avoid loading SSIS whenever it’s practical. If all you want to do is extract data from one table/view into a SQL Server table, then my suggestion would be to use the OpenQuery method. Example:

SELECT * FROM OPENQUERY(LinkedServerName, 'select col1, col2 from oracleTableOrView')

I’ve found that to be the most effective route, but a four-part name query through the linked server would also work:

SELECT COL1, COL2 FROM LinkedServerName..SCHEMA_NAME.TABLE_NAME

Please note that the syntax above is case sensitive on the Oracle side of the house.

This flow extracts data from a web service and loads it into a relational database.

Step 1. Create a source HTTP connection.


Step 2. Create a destination connection for the relational database. Test it.


Step 3. Create a format for the payload returned by web service, most likely JSON or XML.


Step 4. Link connection and format and test the response for the web service in the Explorer.

Step 5. It is most likely that the response is a nested JSON or XML so learn how to flatten the nested dataset.


Step 6. Create a new web service to database flow.


Step 7. Add a new transformation and select from connection, format, and endpoint and to connection and table.


Step 8. Click the MAPPING button and configure the “flattening” for the nested JSON or XML. Most likely it is going to be a Source SQL.


Step 9. Test the transformation. Make sure it returns the flat data set.


Step 10. Add per-field mapping if needed.


Step 11. Configure MERGE (UPSERT) if needed.


Step 12. Save the flow and execute it manually.

Step 13. Schedule the flow to be executed periodically.

The Hello World of data engineering is making a one-to-one copy of a table from the source to the target database by bulk-loading data. The fastest way to achieve this is to export a table into a CSV file from the source database and import the CSV file into a table in the target database. With any database, importing data from a flat file is faster than using insert or update statements.

To connect to an ODBC data source with Python, you first need to install the pyodbc module. Obviously, you also need to install and configure ODBC for the database you are trying to connect to.

Let’s load the required modules for this exercise. The code here works for both Python 2.7 and 3.

import sys
import pyodbc
import pandas as pd

Exporting table to CSV

The function below takes a select query, a file path for the exported file, and connection details. The best practice is to turn on autocommit; some ODBC drivers will give you an error if this parameter is not there.

def table_to_csv(sql, file_path, dsn, uid, pwd):
    '''
    This function creates a csv file from the query result with the ODBC driver
    '''
    try:
        cnxn = pyodbc.connect('DSN={};UID={};PWD={}'.format(dsn, uid, pwd),
                              autocommit=True)
        print('Connected to {}'.format(dsn))
        # Get data into a pandas dataframe
        df = pd.read_sql(sql, cnxn)
        # Write to csv file
        df.to_csv(file_path, encoding='utf-8', header=True,
                  doublequote=True, sep=',', index=False)
        print('CSV file has been created')
        cnxn.close()

    except Exception as e:
        print('Error: {}'.format(str(e)))
        sys.exit(1)

The execution example is exporting the city table to a csv file from MySQL. ODBC is set up with MySQL_ODBC as DSN.

Load Table from CSV

The function below takes a csv upload query and connection details to import the CSV into a table. Autocommit should be turned on. The local_infile parameter enables MySQL’s LOAD DATA INFILE command. It may not be relevant for other databases; if the parameter is not relevant for the specific database’s connection, it will be ignored, so you can keep the local_infile parameter for other databases.

def load_csv(load_sql, dsn, uid, pwd):
    '''
    This function will load a table from a csv file according to
    the load SQL statement through ODBC
    '''
    try:
        cnxn = pyodbc.connect('DSN={};UID={};PWD={}'.format(dsn, uid, pwd),
                              autocommit=True, local_infile=1)
        print('Connected to {}'.format(dsn))
        cursor = cnxn.cursor()
        # Execute SQL Load Statement
        cursor.execute(load_sql)
        print('Loading table completed successfully.')
        cnxn.close()

    except Exception as e:
        print('Error: {}'.format(str(e)))
        sys.exit(1)

Let’s load the data exported with the first function into both MySQL and PostgreSQL databases. Each database has its own SQL syntax for this, and you need to pass the statement to the function. MySQL uses the LOAD DATA INFILE command while Postgres uses the COPY command.
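Hedged examples of the two statements (the city table and /tmp/city.csv path are assumed to match the earlier MySQL example; either string would be passed to load_csv() above):

```python
# MySQL: bulk-load a local CSV file, skipping the header row written by to_csv.
mysql_load_sql = """
LOAD DATA LOCAL INFILE '/tmp/city.csv'
INTO TABLE city
FIELDS TERMINATED BY ','
IGNORE 1 LINES
"""

# PostgreSQL: COPY reads the CSV server-side, honoring the header row.
postgres_load_sql = """
COPY city FROM '/tmp/city.csv'
WITH (FORMAT csv, HEADER true)
"""
```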

The world as seen by a fish… another blog by Azahar Machwe


Getting your data from its source (where it is generated) to its destination (where it is used) can be very challenging, especially when you have performance and data-size constraints (how is that for a general statement?).

The standard Extract-Transform-Load sequence explains what is involved in any such Source -> Destination data-transfer at a high level.

We have a data-source (a file, a database, a black-box web-service) from which we need to ‘extract’ data, then we need to ‘transform’ it from source format to destination format (filtering, mapping etc.) and finally ‘load’ it into the destination (a file, a database, a black-box web-service).

In many situations, using a commercial third-party data-load tool or a data-loading component integrated with the destination (e.g. SQL*Loader) is not a viable option. This scenario can be further complicated if the data-load task itself is a big one (say upwards of 500 million records within 24 hrs.).

One example of the above situation is when loading data into a software product using a ‘data loader’ specific to it. Such ‘customized’ data-loaders allow the decoupling of the products’ internal data schema (i.e. the ‘Transform’ and ‘Load’ steps) from the source format (i.e. the ‘Extract’ step).

The source format can then remain fixed (a good thing for the customers/end users) and the internal data schema can be changed down the line (a good thing for product developers/designers), simply by modifying the custom data-loader sitting between the product and the data source.

In this post I will describe some of the issues one can face while designing such a data-loader in Java (1.6 and upwards) for an Oracle (11g R2 and upwards) destination. This is not a comprehensive post on efficient Java or Oracle optimization. This post is based on real-world experience designing and developing such components. I am also going to assume that you have a decent ‘server’ spec’d to run a large Oracle database.

Preparing the Destination

We prepare the Oracle destination by making sure our database is fully optimized to handle large data-sizes. Below are some of the things that you can do at database creation time:

– Make sure you use BIGFILE table-spaces for large databases. BIGFILE table-spaces provide efficient storage for large databases.

– Make sure you have large enough data-files for TEMP and SYSTEM table-space.

– Make sure the constraints, indexes and primary keys are defined properly as these can have a major impact on performance.

For further information on Oracle database optimization at creation time you can use Google (yes! Google is our friend!).

Working with Java and Using JDBC

This is the first step to welcoming the data into your system. We need to extract the data from the source using Java, transforming it and then using JDBC to inject it into Oracle (using the product specific schema).

There are two separate interfaces for the Java component here:

1) Between Data Source and the Java Code

2) Between the Java Code and the Data Destination (Oracle)

Between Data Source and Java Code

Let us use a CSV (comma-separated values) format data-file as the data-source. This will add a bit of variety to the example.

Using the ‘BufferedReader’ (java.io) one can easily read gigabyte-size files line by line. This works best if each line in the CSV contains one data row, so we can read, process, and discard each line. Not needing to hold more than a line at a time in memory allows your application to keep a small memory footprint.
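A Python analogue of that BufferedReader pattern (the sample CSV data is invented): the file is streamed one line at a time, so only one row is ever held in memory.

```python
import tempfile

# Write a tiny sample CSV to read back (stand-in for a gigabyte-size file).
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("1,alice\n2,bob\n3,carol\n")
    path = f.name

count = 0
with open(path) as csv_file:
    for line in csv_file:              # reads one line at a time
        fields = line.rstrip("\n").split(",")
        count += 1                     # "process" step: here we just count rows

print(count)  # 3
```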

Between the Java Code and the Destination

The second interface is where things get really interesting: making Java work efficiently with Oracle via JDBC. The most important feature when inserting data into the database, one you cannot do without, is the batched prepared statement. Using prepared statements (PS) without batching is like taking two steps forward and ten steps back; in fact, using a PS without batching can be worse than using normal statements. Therefore always use PSs, batch them together, and execute them as a batch (using the executeBatch method).

A point about the Oracle JDBC drivers: make sure the batch size is reasonable (i.e. less than 10K). This is because, with certain versions of the Oracle JDBC driver, if you create a very large batch, the batched insert can fail silently while you are left feeling pleased that you just loaded a large chunk of data in a flash. You will discover the problem only if you check the row count in the database after the load.

If the data-load involves sequential updates (i.e. a mix of inserts, updates and deletes) then batching can still be used without destroying data integrity. Create separate batches for the insert, update and delete prepared statements and execute them in the following order:

  1. Insert batches
  2. Update batches
  3. Delete batches
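The insert/update/delete batch ordering above can be sketched as follows (Python's sqlite3 executemany standing in for JDBC's addBatch/executeBatch; the table and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# 1. Insert batch: one round trip for the whole batch instead of one per row.
batch = [(1, "a"), (2, "b"), (3, "c")]
conn.executemany("INSERT INTO items VALUES (?, ?)", batch)

# 2. Update batch.
conn.executemany("UPDATE items SET name = ? WHERE id = ?", [("z", 1)])

# 3. Delete batch.
conn.executemany("DELETE FROM items WHERE id = ?", [(3,)])
conn.commit()

# As the text advises, verify the row count after the load:
print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 2
```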
  1. Obviously the quickest option, in terms of our data-load, is to drive the truck through the gates (CoPs disabled) and dump the cargo (data) at the destination, without stopping for a check at the gate or after unloading (CoPs enabled for future changes but existing data not validated). This is only possible if the contract with the data-source provider puts the full responsibility for data accuracy with the source.
  2. The slowest option will be if the truck is stopped at the gates (CoPs enabled), unloaded and each cargo item examined by the gatekeepers (all the inserts checked for CoPs violations) before being allowed inside the destination.
  3. A compromise between the two (i.e. the middle path) would be to allow the truck to drive into the destination (CoPs disabled), unload the truck and, at the time of transferring the cargo to the destination, check it (CoPs enabled after load and existing data validated).

One thought on “ Efficient Data Load using Java with Oracle ”

Good summation of ETL (IMHO) – thank you.
Do you have any thoughts on XSLT for the transform method?


This article demonstrates how to import data into an Autonomous Data Warehouse (ADW) or Autonomous Transaction Processing (ATP) service on the Oracle Cloud using the impdp utility.

The examples in this article are based on the Autonomous Data Warehouse (ADW), but the same method works fine for the Automated Transaction Processing (ATP) service too.

Export Your Existing Data

We have a schema called TEST in an Oracle 18c instance on Oracle Database Cloud Service (DBaaS). The schema has two tables (EMP and DEPT), which we want to transfer to the Autonomous Data Warehouse (ADW) or Autonomous Transaction Processing (ATP).

Create a directory object.

Export the schema. The ADW documentation suggests the EXCLUDE and DATA_OPTIONS options in the example below. These options are not necessary for ATP service. For such a small import it is silly to use the PARALLEL clause, but we want to produce multiple dump files.

This resulted in the following dump files.

These dump files were uploaded to an AWS S3 bucket.

I found I had to set the version=12.2 option during the export or I would receive the following error during the import into ADW and ATP.

It would appear there is something interesting about the version of 18c used for ADW.

Object Store Credentials

We need to create a credential containing a username and password for the connection to the object store. If you are using an AWS S3 bucket, the username and password are as follows.

  • username : AWS access key
  • password : AWS secret access key

The credentials are dropped and created using the DROP_CREDENTIAL and CREATE_CREDENTIAL procedures of the DBMS_CLOUD package respectively.

Import Data from S3

For the import to work you will have to make a connection from an Oracle client to the ADW database. You can see the necessary setup to do this here.

From an 18c client we can issue the following type of import. The CREDENTIAL option specifies the object store credential to be used for the import. The DUMPFILE option specifies the URIs of the dump files in the object store. The TRANSFORM and EXCLUDE options are the recommended settings in the ADW documentation, but are not necessary for the ATP service. In this case we are using REMAP_SCHEMA to place the objects into a schema called MY_USER on ADW.

From a 12.2 or earlier client we need to set the default credential for the database on ADW.

We can issue the following type of import. The CREDENTIAL option isn’t used and instead the “default_credential:” prefix is used before each object store URI. The rest of the parameters are the same as the previous example.

I ran both these imports from the 18c client on my DBaaS service, but remember, it is initiating the import process on the ADW database.

Get the Log File

If we want to read the contents of the impdp log file we can push it across to the object store using the PUT_OBJECT command.

It can then be downloaded from the object store.

Autonomous Transaction Processing (ATP)

The method for importing data from a dump file in an object store is the same for the Autonomous Transaction Processing (ATP) service as the Autonomous Data Warehouse (ADW) service.

Remember you are able to create a variety of access structures in ATP that you can’t in ADW.

Writing our experiences with AWS, Oracle, PostgreSQL

Oracle GoldenGate supports Oracle to PostgreSQL migrations by supporting PostgreSQL as a target database, though reverse migration, i.e. PostgreSQL to Oracle, is not supported. One of the key aspects of these database migrations is the initial data load phase, where full table data has to be copied to the target datastore. This can be a time-consuming activity, with the time taken to load varying based on table sizes. Oracle suggests using multiple GoldenGate processes to improve the data load performance, or using native database utilities to perform faster bulk loads.

To use a database bulk-load utility, you use an initial-load Extract to extract source records from the source tables and write them to an extract file in external ASCII format. The file can be read by Oracle’s SQL*Loader, Microsoft’s BCP, DTS, or SQL Server Integration Services (SSIS) utility, or IBM’s Load Utility ( LOADUTIL ).

GoldenGate for PostgreSQL doesn’t provide native file loader support like BCP for MS SQL and SQL*Loader for Oracle. As an alternative, we can use the FORMATASCII option to write data into CSV files (or any custom delimiter) and then load them using the PostgreSQL copy command. This approach is not automated, and you will have to ensure that all files are loaded into the target database.

In this post, we will evaluate two approaches, i.e. using multiple Replicat processes and using ASCII dump files with the PostgreSQL copy command, to load data and compare their performance. The below diagram shows both approaches.


To compare the scenarios, I created a test table with 200M rows(12GB) and used a RDS PostgreSQL instance (db.r3.4xlarge with 10k PIOPS)

Approach 1 : Using Oracle Goldengate multiple replicat processes to load data

In this approach, I used multiple Oracle Goldengate Replicat processes (8) using @range filter to load data into PostgreSQL.

We were able to get about 5k inserts/sec per thread and loaded the table in about 88 minutes with 8 Replicat processes.

One key point to remember: if you are working with EC2 and RDS databases, the EC2 machine hosting the trail files and the RDS instance should be in the same AZ. During testing, we noticed that the insert rate dropped drastically (to about 800 inserts/sec) when using cross-AZ writes. Below is the Replicat parameter file used for performing the data load.
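The actual parameter file from the original post is not reproduced in this copy; a minimal Replicat parameter file for such a load might look roughly like the following sketch (the process name, schema, table, and connection alias are placeholders, and the exact TARGETDB/connection syntax depends on your GoldenGate for PostgreSQL build):

```
REPLICAT rload1
TARGETDB pgdsn, USERIDALIAS pgadmin
BATCHSQL
MAP srcschema.bigtable, TARGET public.bigtable, FILTER (@RANGE (1, 8));
```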

You will need to create the additional Replicat parameter files by changing the range clause, e.g. FILTER (@RANGE (2, 8)), FILTER (@RANGE (3, 8)), etc.

Approach 2: Data load using PostgreSQL copy command

In the second approach, we used a parameter file with the FORMATASCII option (refer to the snippet below) to create a GoldenGate Extract process that dumped the data with a '|' delimiter, and then used the PostgreSQL COPY command to load the data from these dump files.

Extract Parameter file
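The parameter file snippet itself is missing from this copy; an initial-load Extract using FORMATASCII with a '|' delimiter might look roughly like this sketch (process, host, and table names are placeholders):

```
SOURCEISTABLE
USERIDALIAS oraadmin
RMTHOST target-host, MGRPORT 7809
RMTFILE ./dirdat/bigtable.dat, PURGE
FORMATASCII, DELIMITER '|', NONAMES, NOQUOTE
TABLE srcschema.bigtable;
```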

With the above parameter file, the GoldenGate Extract process sends the data to the remote system and stores it in dump files. These files are then loaded into PostgreSQL using the COPY command.
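The COPY invocation itself is not shown in this copy; loading one of the '|'-delimited dump files with psql could look like this sketch (host, database, user, table, and file names are placeholders):

```shell
psql -h target-host -d targetdb -U loader \
  -c "\copy public.bigtable FROM 'dirdat/bigtable.dat' WITH (FORMAT text, DELIMITER '|')"
```

Running one psql invocation per dump file, and creating the primary key index only after the load completes, keeps the bulk load fast.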

The data load took 21 minutes, which is nearly 4x faster than the initial approach. If you remove the primary key index, the time taken drops even further.


Ever thought of creating a webservice your app can consume in no time? Then look no further, as Spring Boot offers an easy way to create stand-alone, production-grade Spring-based applications. The setup is pretty straightforward and usually takes little time to get up and running.

Spring integrates a wide range of modules under its umbrella, such as spring-core, spring-data, spring-webmvc, etc.

Spring Web MVC leverages the power of dependency injection to develop loosely coupled applications, which, when done properly, leads to effective and efficient applications.

In this article, I'll show you how to create a RESTful webservice with Spring Boot and Spring Data JPA, with the data persisted in a local Oracle database. We'll set up the project using the following:

· Spring boot starter web

· Spring boot starter data JPA

· Oracle database 11g express

· Oracle JDBC driver ojdbc.jar

1. Download Oracle Setup

First, download the Oracle 11g Express database from oracle.com and install it on your local machine. You need to register for an Oracle account, if you don't already have one, before you can download the database. Extract the zip file and run the setup file. Follow the installation steps and be sure to remember your password, as you'll need it to configure the data source. The default username is "system".

2. Create a New Project in Intellij IDEA

Create a new project using Spring Initializr from IntelliJ and add the following dependencies: Spring Web and Spring Data JPA.

I assume you’ve installed maven, if not go here to set up maven.

3. Install Oracle Driver

The ojdbc jar provides the necessary JDBC driver for the Oracle database.

To add the ojdbc driver to your local repository, download the ojdbc7.jar file from oracle.com.

Copy the jar file to any folder, preferably in your C: folder.

Run the following maven command:
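The command itself is missing from this copy; installing a downloaded jar into the local Maven repository is typically done with install:install-file. The file path and the coordinates below (groupId com.oracle, artifactId ojdbc7, version 12.1.0.1) are assumptions based on the ojdbc7.jar mentioned above:

```shell
mvn install:install-file -Dfile=C:/ojdbc7.jar -DgroupId=com.oracle \
    -DartifactId=ojdbc7 -Dversion=12.1.0.1 -Dpackaging=jar
```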

For Windows users, replace the forward slashes with backslashes.

Add the following dependency to the pom.xml file:
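The dependency block is also missing from this copy; assuming the jar was installed under groupId com.oracle, artifactId ojdbc7, version 12.1.0.1, it would be:

```xml
<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc7</artifactId>
    <version>12.1.0.1</version>
</dependency>
```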

Your project’s pom.xml should look like this:

4. Create The Project Structure

Since we are using the MVC architectural pattern, which separates the application into model, view and controller, we need to create different packages for controllers, entities (or models), daos (data access objects), and services. The project structure looks like this:


5. Configure Oracle Datasource on IntelliJ IDEA

In the Database tool window, add a new Oracle data source. Download the missing drivers if you haven't already.

Input the username (default is "system") and password for your local Oracle DB and test the connection.

All things being equal, it should be successful. Click OK.

6. Add Application Properties

The application.properties file lets Spring know the database configuration and profile to use at runtime.

Navigate to your src/main/resources and add the following to the application.properties file.
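The properties themselves are missing from this copy; a typical configuration for a local Oracle XE instance might look like the following sketch (the password, SID XE, port 1521, and dialect are assumptions):

```properties
spring.datasource.url=jdbc:oracle:thin:@localhost:1521:XE
spring.datasource.username=system
spring.datasource.password=your_password
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
spring.jpa.database-platform=org.hibernate.dialect.Oracle10gDialect
```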

7. Create the User Model

Next, we'll create a User class annotated with the JPA @Entity annotation. This class creates a model which JPA uses to establish a mapping between the User entity and a table in the Oracle relational DB.

Navigate to the entity package and create a new class called User and add the following:
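The class body is not reproduced in this copy; based on the POST example later in the article (a name and a salary), a minimal sketch might be the following (the package name, field types, and the APP_USER table name are assumptions; USER is a reserved word in Oracle, so an explicit table name avoids a conflict):

```java
package com.example.demo.entity; // package name is an assumption

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "APP_USER") // "USER" is reserved in Oracle
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;
    private double salary;

    // JPA requires a no-arg constructor
    public User() {}

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public double getSalary() { return salary; }
    public void setSalary(double salary) { this.salary = salary; }
}
```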

Next, we create a data access object, an interface that extends the JpaRepository interface. The DAO is used to perform CRUD database operations.

Navigate to the dao package and create an interface called UserDao and add the following code:
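The code is missing from this copy; a minimal sketch (package names are assumptions) would be:

```java
package com.example.demo.dao; // package name is an assumption

import com.example.demo.entity.User;
import org.springframework.data.jpa.repository.JpaRepository;

// JpaRepository supplies the CRUD methods (save, findAll, findById, delete, ...)
public interface UserDao extends JpaRepository<User, Long> {
}
```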

The service class contains the methods that handle the business logic of the application. Navigate to the service package and create a new Java class called UserService. For brevity's sake, I'll only create a service to add a new user and retrieve all users.
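The service code is missing from this copy; a minimal sketch matching the two operations described (package name and method names are assumptions) might be:

```java
package com.example.demo.service; // package name is an assumption

import com.example.demo.dao.UserDao;
import com.example.demo.entity.User;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class UserService {

    private final UserDao userDao;

    // constructor injection of the DAO
    public UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    public User addUser(User user) {
        return userDao.save(user);
    }

    public List<User> getAllUsers() {
        return userDao.findAll();
    }
}
```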

Next, we create the User Controller which holds all the REST endpoints. Navigate to the controller package and create a new class called UserController and add the following:
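The controller code is missing from this copy; a minimal sketch is below. The /user/all endpoint is mentioned later in the article; the /user/add path and package name are assumptions:

```java
package com.example.demo.controller; // package name is an assumption

import com.example.demo.entity.User;
import com.example.demo.service.UserService;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/user")
public class UserController {

    private final UserService userService;

    public UserController(UserService userService) {
        this.userService = userService;
    }

    // POST a User object (name and salary) to create a new record
    @PostMapping("/add")
    public User addUser(@RequestBody User user) {
        return userService.addUser(user);
    }

    // GET all persisted users
    @GetMapping("/all")
    public List<User> getAllUsers() {
        return userService.getAllUsers();
    }
}
```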

Now that the application is ready, open the terminal in IntelliJ and run the following Maven command:
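The command itself is absent from this copy; starting the application with the Spring Boot Maven plugin is typically:

```shell
mvn spring-boot:run
```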

This creates the User table in the database using the JPA annotations and the ojdbc configuration in the application.properties file. In the terminal logs, you should also see the SQL statements used to create the tables.

We’re going to test the REST endpoints from the UserController using Postman.

On Postman, you can perform a POST request to create a new user.

Here we POST a User object that has a name and salary. Here's a screenshot of the POST request and the corresponding JSON result:


Also, you can get all users persisted in the database by performing a GET request on the /user/all endpoint. The sample GET request is shown below:


Voila! So that's basically how you can set up a webservice using Spring Boot. Thanks for reading.

Don’t forget to drop your questions, contributions, and thoughts in the comment section below.

You can clone the project from github.com: Ikhiloya/spring-jpa-oracle-demo, a sample demo app that shows how to create a RESTful webservice using Spring Boot, Spring Data…

Useful Resources

NOTE: You can contribute your ideas to this project for Spring Boot beginners.

Ideas to consider:

1. spring boot with mysql or any other database
2. spring boot with kotlin
3. spring security
4. JUnit Test
5. Integration Test
6. Your own idea would be great…just do it!

I have an Oracle PL/SQL procedure that extracts data into an Excel worksheet. I am wondering if there is any way, from the procedure, to also set the macros of the Excel document that is created?

For example, the macros are usually run manually:
Go to Tools, Macros – Macro. Select the "Automate" macro and click "Run".

However, I am trying to automate the manual process. Any suggestions or direction?


Setting a macro would not be possible from PL/SQL.
Could you please tell us what that macro is going to do?

One suggestion would be to connect to the Oracle database from Excel using a macro and fetch the data into the Excel sheet.

This Oracle Database 11g: Data Warehousing Fundamentals training will teach you about the basic concepts of a data warehouse. Explore the issues involved in planning, designing, building, populating and maintaining a successful data warehouse.

  • Define the terminology and explain basic concepts of data warehousing.
  • Identify the technology and some of the tools from Oracle to implement a successful data warehouse.
  • Describe methods and tools for extracting, transforming and loading data.
  • Identify some of the tools for accessing and analyzing warehouse data.
  • Describe the benefits of partitioning, parallel operations, materialized views and query rewrite in a data warehouse.
  • Explain the implementation and organizational issues surrounding a data warehouse project.
  • Improve performance or manageability in a data warehouse using various Oracle Database features.

Oracle’s Database Partitioning Architecture

You’ll also explore the basics of Oracle’s Database partitioning architecture, identifying the benefits of partitioning. Review the benefits of parallel operations to reduce response time for data-intensive operations. Learn how to extract, transform and load data (ETL) into an Oracle database warehouse.

Improve Data Warehouse Performance

Learn the benefits of using Oracle's materialized views to improve data warehouse performance. Instructors will give a high-level overview of how query rewrites can improve a query's performance. Explore OLAP and data mining, and identify some data warehouse implementation considerations.

Use Data Warehousing Tools

During this training, you’ll briefly use some of the available data warehousing tools. These tools include Oracle Warehouse Builder, Analytic Workspace Manager and Oracle Application Express.

Dani Schnider’s Blog


If you work with Data Vault for a data warehouse running in an Oracle database, I strongly recommend using Oracle 12.2 or higher. Why? Because since Oracle 12c Release 2, join elimination works for more than one join column. This is essential for queries on a Data Vault schema.

I often have discussions with customers or training attendees that have concerns about query performance on Data Vault schemas. Data Vault is a suitable data modeling method for integration and historization of data from different source systems in a data warehouse. But because a Data Vault schema typically contains a high number of tables, a lot of joins are required to select data from all the Hubs, Links and Satellites that are involved in each query. A little-known performance feature introduced with Oracle 12.2 helps to improve the query performance.

In a DOAG presentation this year, we showed several ways data can be loaded from a Data Vault schema into a star schema. In one live demo, I explained the purpose of join elimination in this context. Since Oracle 12.2, multi-column join elimination is supported, as Jonathan Lewis wrote three years ago in his blog post Join Elimination 12.2. In this blog post, I want to explain the benefit of this feature for Data Vault in detail.

In Data Vault, we typically have multiple Satellite tables attached to a Hub table. A separate Satellite may be created for each source system, for different change frequencies, for additional columns due to new requirements and so on. So, it is common to have many Satellites for one Hub. This has advantages for independent load jobs, data model enhancements, etc., but makes it more difficult to extract data from the Data Vault schema. To reduce the complexity of the extraction jobs or ad-hoc queries, a proven approach is to create (or better: generate) a view layer on top of the Data Vault model. For example, we can create two views for each Hub and its corresponding Satellites:

  • The Current View returns the current version of all Satellites for each Hub key
  • The Version View (or History View) returns the whole history of the data, with a separate version for each validity period

For good query performance, a common approach in Data Vault is to create a "Point in Time table" (PIT table) for each Hub. I already explained in the blog post Loading Dimensions from a Data Vault Model how the PIT table and the view layer can be implemented. I used a similar approach for my demo example at the DOAG presentation.

Let’s assume we have a Hub H_CUSTOMER with three Satellite tables S_CUSTOMER_INFO, S_CUSTOMER_ADDRESS and S_BILLING_ADDRESS. The PIT table PIT_CUSTOMER contains the load date for each Satellite version that is valid in a particular validity range. Details are explained here. On top of these tables, we create a Current View and a Version View.


The two views look almost the same. The only difference is that the Current View contains a filter WHERE pit.load_end_date IS NULL to return only the current versions. The Version View does not apply this filter, but on the other hand contains two additional columns, VALID_FROM and VALID_TO, to describe the validity range. To show the behaviour of join elimination, I will use the Current View, but it works exactly the same with the Version View.

The Current View for our example contains all attributes of all Satellites and is created like this:
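The view DDL from the original post is missing in this copy; a hedged reconstruction based on the tables described above might look like the following (all column names are assumptions; the essential parts are the outer joins on the two columns hub key and load date, and the WHERE filter on the PIT table):

```sql
CREATE OR REPLACE VIEW v_customer_current AS
SELECT h.h_customer_key,
       h.customer_number,
       i.first_name, i.last_name,    -- from S_CUSTOMER_INFO
       a.street, a.city, a.zip,      -- from S_CUSTOMER_ADDRESS
       b.street AS bill_street,      -- from S_BILLING_ADDRESS
       b.city   AS bill_city,
       b.zip    AS bill_zip
  FROM h_customer   h
  JOIN pit_customer pit ON pit.h_customer_key = h.h_customer_key
  LEFT JOIN s_customer_info    i ON i.h_customer_key = h.h_customer_key
                                AND i.load_date      = pit.s_customer_info_ldts
  LEFT JOIN s_customer_address a ON a.h_customer_key = h.h_customer_key
                                AND a.load_date      = pit.s_customer_address_ldts
  LEFT JOIN s_billing_address  b ON b.h_customer_key = h.h_customer_key
                                AND b.load_date      = pit.s_billing_address_ldts
 WHERE pit.load_end_date IS NULL;
```

With this shape, a query that selects only columns of one Satellite lets the optimizer eliminate the outer joins to the other Satellites, provided multi-column join elimination (Oracle 12.2+) is available.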

First, extract the Oracle data into sequential files.

Second, decide which indexes are needed for the VSAM files.

Next, use IDCAMS to define the new VSAM files and the required indexes.

Then load the extracted data into the new VSAM files.

If this is not what you are looking for, please clarify.

Madhu MF wrote:

Hi,
How do I extract Oracle data into a sequential file? You are correct, but I want to know the utilities, and whether there is any other way to load the data. Could you give me a small example?

Could you kindly advise. We need this urgently for Sreenadh to start the work.
Appreciate all your help.

Thanks, Madhu

Madhu MF wrote:

It is very urgent.
Gnanas N replied:

Unload the table to a text file and then load it to the mainframe.

When I googled your requirement, I found many utilities.


Extract Data from Incoming Emails and Convert It to Database

If you're doing business online, you probably maintain a database of your customers, clients, or subscribers. Typically you store the customers' email addresses, names, order numbers, and purchased products in the database. Plus, you may want to keep the customers' personal information such as postal addresses, phone numbers, fax numbers and much more in your database. It's a large volume of data that you need to keep in order: add new customers, remove customers, and update customers' information on request.

Have you ever thought about how much time you spend manually extracting data from emails and pasting it into your database? It can literally take hours. And as your online business grows, you'll receive more and more emails. One day you may find that extracting form data from email into Excel, MS Access, or any other database eats up almost all your working time.

Parse Any Email to Database with G-Lock Email Processor

So, why not automate your email processing and convert emails to the database records in minutes? G-Lock Email Processor will extract data from email forms — customer’s name, email address, postal address, order ID, product name they purchased, purchase date, license type, or whatever you tell it to extract — and add it to your database. It can be a local or remote ODBC compatible database such as MS Access, MySQL, MS SQL, Oracle and others.


It's easy to get started with G-Lock Email Processor. You just create an email account within the program from which it will read your emails, add a filter to catch the emails you need (for example, purchase orders, completed web forms, or unsubscribe requests), and tell the program how to extract data from the email and what to do with the extracted data: write it to the database, update existing records, or delete records based on the extracted data.


Once set up, the extraction rule will boil the email down to the data you are looking for.

For example, each email campaign may generate a number of unsubscribe requests. You don't want to waste your time on those recipients, but you can't ignore them either. So you set up G-Lock Email Processor to catch unsubscribe requests by, for example, the word "Unsubscribe" in the subject, extract the recipient's email address, check your database for that email address, and remove the associated record from the database. While the program does the job for you, you can focus on more important aspects of your business, or just take a well-deserved rest.

The greatest thing is that you don't even need to start G-Lock Email Processor manually. It runs as a service and starts automatically when you turn your computer on. You can even have the program work for you 24 hours a day, 7 days a week, and get your work done in a timely and accurate manner without spending a single minute of your own working time on it.

G-Lock Email Processor is a flexible data parser and extractor for converting incoming emails into easy-to-handle database records.

Try it for free and parse your first emails to the database within minutes!

In preparing my presentation for the Michigan Oak Table Symposium, I came across AWR extract and load. While these statements are documented in the Oracle manuals (kind of), I don’t see much discussion online, which is a good barometer for the popularity of an item. Not much discussion – not very well known or used. Although the scripts are in $ORACLE_HOME/rdbms/admin in 10.2, they are not documented.

One of the frustrations with AWR (and Statspack) has been that diagnosing past performance issues and trending depend on keeping a large number of snapshots online. This means more production storage and resource consumption by queries. Would it not be nice to be able to take the AWR data from your production system, load it into a DBA repository, and then do all your querying there? Perhaps even run your own custom ETL to pre-aggregate and create custom views and procedures?

As of 10.2, Oracle supplies two scripts that enable you to extract and load AWR data into another database (even one already running AWR snapshots). You can even take the AWR data from a 10.2 database on Solaris and load it into an 11.2 database on Windows XP (other variations may work…but these are the two versions I have handy). I also took 11.2 database on Linux and loaded it to the Windows database.

Extracting AWR data is pretty straightforward. Log in as a DBA or appropriately privileged account and run $ORACLE_HOME/rdbms/admin/awrextr.sql.

The Data Pump export took 10 to 20 minutes to extract 7 days of AWR data. The files were less than 50 MB and compressed to less than half that size. FTP (or scp) the file to the DBA repository server and uncompress it. Make certain that the dump file is stored in a directory that is also defined as an Oracle directory object.

The AWR load was also fairly straightforward, with one minor wrinkle with the dump file name.

The process will then prompt for the staging schema name; the default is AWR_STAGE. If you accept the default, the script will create the AWR_STAGE user after asking you for default tablespaces. Once it has completed the AWR load, the script will drop the AWR_STAGE user.
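In outline, the two scripts are run from SQL*Plus like this (a sketch; both scripts are interactive and prompt for their inputs):

```
-- On the source database: export AWR snapshots to a Data Pump dump file
SQL> @?/rdbms/admin/awrextr.sql
     -- prompts for: DBID, number of days of snapshots, begin/end
     -- snapshot ids, directory object, and dump file name

-- On the repository database: load the transferred dump file
SQL> @?/rdbms/admin/awrload.sql
     -- prompts for: directory object, dump file name (entered WITHOUT
     -- the .dmp extension), staging schema, and tablespaces
```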

After the process completes, the AWR tables now have new data in them! You can query DBA_HIST_SNAPSHOT or any of the other DBA_HIST views (including DBA_HIST_ACTIVE_SESS_HISTORY ). Unfortunately, the standard AWR and ASH reports use the current database dbid, which won’t be the same as the dbid of the data you have just loaded. You will need to create your own AWR/ASH scripts or modify the ones provided…but that is for the next blog post!

BODS Data Migration ETL Process:

The Extract, Transform & Load (ETL) process view is presented below for better understanding.

The diagram below shows how ETL is used in stages to load data into the SAP client using one of the identified interfaces.


Figure 1.0 ETL Process per Object

a. Data Extraction & Cleansing

  • Connect to the different sources identified
  • Extract & cleanse data using different transformations (data correction, data quality, etc.)
  • Load data into the staging DB, which is an Oracle database table

b. Data Transformation

  • Connect to the extracted & cleansed source, which is in the Oracle staging DB
  • Apply the required data load transformations and load data into an Oracle table in the staging DB
  • Get data from the Oracle table in the staging DB & call the appropriate load interface (BAPI/RFC) to load data into SAP


Figure 2.0 ETL Job Flow

Each data migration job (ETL job) consists of the component steps explained below:

  1. Extract data from the source to the staging area (Oracle DB)
  2. Validate the data (as defined in the Technical Design Document (TDD)) by applying a variety of transforms and place it in the staging area (Oracle DB)
  3. Apply mappings and format structure (to the validated data) to match the SAP interface (BAPI/RFC) and place it in the staging area (Oracle DB)
  4. Load the (validated, mapped and formatted) data using the SAP interface (BAPI/RFC), which loads the data on to SAP

SQLines Data is an open source, scalable, high performance data transfer, schema conversion and validation tool for Oracle to MariaDB migration.

Why SQLines Data

SQLines Data benefits:

Migration Features

You can use SQLines SQL Converter tool to convert stored procedures, functions, triggers, views and other objects.

Scalability and High-Performance

Designed for DBAs and Enterprise-Class Migrations

Logging and Statistics

SQLines Data in Command Line

You can use SQLines Data tool in command line. Just launch sqldata.exe on Windows or ./sqldata on Linux with the specified options.

For information how to set up Oracle and MariaDB connections, see SQLines Data Connection String Formats.

sqldata -t=emp -sd=oracle, user/password@host/sid -td=mariadb, user/password@host, db_name

The -t option defines the table name; the -sd and -td options (source and target databases) specify the connection strings for Oracle and MariaDB, respectively.

This command transfers the table emp from the Oracle database to the MariaDB db_name database located on host.

Troubleshooting SQLines Data

Troubleshooting SQLines Data for Oracle to MariaDB migration:

SQLines Data Logs

There are two main sources that can help you troubleshoot SQLines Data:

The sqldata.log file contains detailed information about the Oracle to MariaDB migration process.

By default, sqldata.log file is located in the current working directory. You can use -log command line option to change its location and file name.

The sqldata_ddl.sql file contains all DDL statements executed in MariaDB during the migration.

By default, sqldata_ddl.sql file is located in the current working directory. You can use -out command line option to change its location.

You can enable trace by specifying -trace=yes in the sqldata.cfg file. The trace file can be helpful for SQLines Data developers to resolve crashes or specific data issues.

Data Transfer – The used command is not allowed with this MariaDB version

Sometimes you may receive the error "The used command is not allowed with this MariaDB version" during the data transfer:

The tool uses the in-memory LOAD DATA LOCAL INFILE command, and a possible reason is that it is not allowed by your MariaDB server configuration.

Edit my.cnf (or my.ini on Windows) and set local-infile=1 in the [mysqld] section:
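The snippet itself is missing from this copy; the setting looks like this:

```ini
[mysqld]
local-infile=1
```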

You have to restart the MariaDB server for the change to take effect.

Data Transfer – ORA-01406: fetched column value was truncated

During data export from Oracle you can face the "ORA-01406: fetched column value was truncated" error. The most likely reason is that the length in bytes of a CHAR or VARCHAR2 column stored in the Oracle database is smaller than the length of the column after conversion on the client side for loading into MariaDB. You can work around this with the -char_length_ratio option, which increases the column length allocated on the client.

For example, if you set -char_length_ratio=1.5, the maximum length of all CHAR and VARCHAR2 columns will be increased by 1.5x, so CHAR(10) in Oracle becomes CHAR(15) in MariaDB.

In this article, we show how to insert data into a database from an HTML form in Django.

If you use a ModelForm in Django, you don't need to worry about inserting data into the database, because this is done automatically. That is the case with Django ModelForms.

However, there are times where you may not want to use Django ModelForms and instead just want to code the form directly in HTML and then insert the data that the user has entered into the database. This is what this article addresses. We will show how to insert the data that a user enters into an HTML form into the database.

So let’s create a form in which a user creates a post. This post form will simply take in 2 values, the title of the post and the content of the post.

We will then insert this into the database.

So the first thing we have to do is create our database (the model) in the models.py file.

models.py File

We will call our model Post.

It will only have two fields: title and content.
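The models.py code is missing from this copy; a minimal version matching the description would be the following sketch (the max_length value is an assumption):

```python
# models.py -- a minimal sketch; max_length is an assumption
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()
```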

Okay, this is a very basic database. We simply have a title and a content field.

After this, we save the file and then, within the command line, run the command, py manage.py makemigrations, and then run the command, py manage.py migrate.

createpost.html Template File

Now we'll create our template file. We'll call it createpost.html.

Within this template file, we are going to have the form where a user can submit a post.

It is a simple form that only contains 2 fields: title and content.

This is shown in the code below.
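The template code is missing from this copy; a minimal sketch matching the description (the button text and empty action are assumptions) would be:

```html
<!-- createpost.html: a minimal sketch; button text and action are assumptions -->
<form method="POST" action="">
    {% csrf_token %}
    <input type="text" name="title" placeholder="Title">
    <textarea name="content" placeholder="Content"></textarea>
    <button type="submit">Post</button>
</form>
```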

So this template contains a very basic form that has 2 fields: one which is title and the other which is content.

We need a name attribute with each form field because this is how we will extract the data that the user enters into the field.

views.py File

Lastly, we have our views.py file.

In this file, we will take the data that the user has entered into the form fields and insert the data into a database.

The following code in the views.py file does this.
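The views.py code is missing from this copy; a sketch matching the steps described in the following paragraphs (the view and template names are assumptions) would be:

```python
# views.py -- a sketch; the view name and template name are assumptions
from django.shortcuts import render

from .models import Post  # import the model at the top of views.py

def createpost(request):
    if request.method == 'POST':
        # make sure neither field is blank
        if request.POST.get('title') and request.POST.get('content'):
            post = Post()
            post.title = request.POST.get('title')
            post.content = request.POST.get('content')
            post.save()  # without save(), nothing is written to the database
    return render(request, 'createpost.html')
```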

So this is the heart of our code in which we extract the data from the form fields that the user has entered and insert the data into a database.

The first thing we want to do is make sure that the user has clicked the 'Post' button on the template file. We check this with: if request.method == 'POST':

We want to make sure that the fields aren't blank, so we use the if statement if request.POST.get('title') and request.POST.get('content'): to make sure both fields are filled in.

After this, we create a variable named post and set it equal to Post(), which creates a new instance of the Post model. Remember that you must import the model at the top of the views.py file.

Now, in order to insert data into the database, use the object name followed by a dot and the field name, and set this equal to request.POST.get('attribute').

To save data to the title database field, we use the statement, post.title, and set this equal to request.POST.get(‘title’). This takes the title database field and sets it equal to whatever data the user entered into the title form field.

To save data to the content database field, we use the statement post.content and set it equal to request.POST.get('content'). This takes the content database field and sets it equal to whatever data the user entered into the content form field.

We then save the data. Without the statement, post.save(), the data isn’t saved.

You then would return any template file that you want to.

And this is how we can insert data into a database from an HTML form in Django.

What you will learn
This course will help you understand the basic concepts of administering a data warehouse. You'll learn to use various Oracle Database features to improve performance and manageability in a data warehouse.

Learn To:
Implement partitioning.
Use parallel operations to reduce response time.
Extract, transform and load data.
Create, use and refresh materialized views to improve data warehouse performance.
Use query rewrite to quickly answer business queries using materialized views.
Use SQL access advisor and PL/SQL procedures to tune materialized views for fast refresh and query rewrites.
Identify the benefits of partitioning, in addition to using parallel operations to reduce response time for data-intensive operations.

Benefits To You
Expert instructors will also teach you how to extract, transform and load data into an Oracle database warehouse. Discover how to use SQL Access Advisor to optimize the entire workload. Use materialized views to improve data warehouse performance and learn how query rewrites can improve performance.

Audience
Application Developers
Data Warehouse Administrator
Data Warehouse Developer
Database Administrators
Support Engineer
Technical Consultant

Related Training
Required Prerequisites
Ability to read and understand execution plans
Good working knowledge of SQL and of data warehouse design and implementation
Data Warehouse design, implementation, and maintenance experience

Course Objectives
Use parallel operations to reduce response time for data-intensive operations
Extract, Transform, and Load data in the data warehouse
Create, use, and refresh materialized views to improve the data warehouse performance
Use Query rewrite to quickly answer business queries using materialized views
Use SQL Access Advisor and PL/SQL procedures to tune materialized views for fast refresh and query rewrite
Use the features of compression and resumable sessions
Review the basic Oracle data warehousing concepts

Course Outline

Development Tools
Oracle SQL Developer
Enterprise Manager
Sample Schemas used

Characteristics of a Data Warehouse
Comparing OLTP and Data Warehouses
Data Warehouse Architectures
Data Warehouse Design
Data Warehouse objects
Data Warehouse Schemas

Optimizing Star Queries
Introducing Bitmap Join Indexes
Understanding Star Query Optimization and Bitmap Joined Index Optimization

Partitioned Tables and Indexes
Partitioning Methods
Partitioning Types
Partition Pruning and Star queries

Operations That Can Be Parallelized
How Parallel Execution Works
Degree of Parallelism
Parallel execution plan
Automatic Parallelism

Parallel Query
Parallel DDL
Parallel DML
Tuning Parameters for Parallel Execution
Balancing the Workload

Extraction Methods
Capturing Data With Change Data Capture
Sources and Modes of Change Data Capture
Publish and Subscribe Model: The Publisher and the Subscriber
Synchronous and Asynchronous CDC
Asynchronous AutoLog Mode and Asynchronous HotLog Mode
Transportation in a Data Warehouse
Transportable Tablespaces

Loading Mechanisms
Applications of External Tables
Defining external tables with SQL*Loader
Populating external tables with Data Pump
Other Loading Methods

Data transformation
Transformation Mechanisms
Transformation Using SQL
Table Functions
DML error logging

The Need for Summary Management
Types of Materialized Views
Using Materialized Views for Summary Management
Materialized View Dictionary views

Refresh Options
Refresh Modes
Conditions That Affect the Possibility of Fast Refresh
Materialized View Logs
Partition Change Tracking (PCT) Refresh
Refresh Performance Improvements

What Are Dimensions
Creating Dimensions and Hierarchies
Dimensions and Privileges
Dimension Restrictions
Verifying Relationships in a Dimension
Dimension Invalidation

Query Rewrite: Overview
What Can be Rewritten
Conditions Required for Oracle to Rewrite a Query
Query Rewrite guidelines
Setting Initialization Parameters for Query Rewrite
Query Rewrite Methods
Partition Change Tracking (PCT) and Query Rewrite
Query Rewrite Enhancement to Support Queries Containing Inline Views

SQL Access Advisor: Usage Model
Setting Initial Options
Specifying the Workload Source
Recommendation Options
Schedule and Review
PL/SQL Procedure Flow
Tuning Materialized Views for Fast Refresh and Query Rewrite
Table Compression and Resumable Sessions