How to extract and load data from an Oracle database
Mason Cooper
Updated on March 29, 2026
Microsoft’s business analytics service, Power BI, enables us to connect to hundreds of data sources and produce beautiful reports that can be consumed on the web and across mobile devices, delivering insights throughout our entire organization. So, in this post, I will walk you through the process of getting data into Power BI from an Oracle database.
When you open Power BI Desktop, you will see the following window:
PowerBI welcome page
By clicking on Get Data (located at the top left of the window), the following window will appear. As we want to obtain the data from an Oracle database, the only thing we have to do is mark the Oracle Database option and click the Connect button.
How to connect Power BI to Oracle Database
After clicking the Connect button, the following window will appear:
Connect to Oracle database
In the Server box, we will write exactly the following text:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XE)))
These instructions are written in the tnsnames.ora file, which is located in: C:\oraclexe\app\oracle\product\11.2.0\server\network\ADMIN
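For reference, the alias form of this entry in tnsnames.ora typically looks like the following (a sketch for a default XE install; your alias, host and port may differ):

```
XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = XE)
    )
  )
```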
Note: if your descriptor is not exactly the same, adjust it to match your own tnsnames.ora entry.
Once we have copied and pasted the text above, we have to ensure that the Import option is selected, because we want to import the data directly from the Oracle database. Once all this is set, we click the OK button.
Oracle Database XE
After that, the following window will appear. Here we have to enter our credentials, which correspond to the user we created in Oracle Database Express Edition 11g:
Oracle Database XE login
These are the values we entered when we created that user:
– Database Username: EXAMPLE_USER
– Application Express Username: TEST
– Password: test
– Confirm Password: test
Oracle db in PowerBI
So here, in our case, we have to enter the following credentials:
– Username: EXAMPLE_USER
– Password: test
Navigate Oracle database
Once you click the Connect button, you will arrive at the following window, where you can see the objects in your Oracle database:
If you click on your schema (in our case, EXAMPLE_USER), you will see the same tables we have in Oracle.
Databases in PowerBI:
Oracle Databases Navigator
Load tables in PowerBI
If we click on some of them, we will be able to load them into Power BI Desktop, as follows:
After a few minutes of loading, the selected tables will appear:
Tables loaded in PowerBI
Now, on the right side of the screen, we can see the loaded tables with all their data. In conclusion, this is how to connect Power BI to an Oracle database.
If you want to improve your knowledge of Power BI, you can check out our course. Feel free to enroll in it and enjoy learning!
Stay tuned for more news on our blog and subscribe to our newsletter if you want to receive our new posts in your mail, get course discounts… 🙂
I have been working at SolidQ as a Data Platform Specialist on BI projects since April 2017.
During my training I have always been focused on data, taking courses on SSIS, SSAS, SSRS and on how to use Power BI Desktop, Management Studio, Visual Studio and so on, delving into Power BI Desktop. Nowadays, I work with customers using these tools, applying all the best practices I have learnt.
Re: How to extract Oracle data into Excel file?
The “extract data from a table” part is easy; you could do that with VB/ADO or .NET/ODP.NET. Taking that data and appending it to a spreadsheet might be the hard part, and how you’d do that exactly is really more of a Microsoft question than an Oracle one.
If you want to be able to do this from the database itself and your database is on Windows, you could use .NET Stored Procedures if you can manipulate the spreadsheet in .NET code, or you could use Oracle’s COM Automation Feature if you’re handy with the COM object model for Excel.
How you’d do that exactly via .NET, COM or VB is the crux of the problem and is something you’d need to know before it turns into an Oracle question; but if you already know how to do that and now just need a way to drive it from Oracle, either of the above might help.
Devart Excel Add-in for Oracle allows you to connect Excel to Oracle databases, retrieve and load live Oracle data into Excel, and then modify this data and save the changes back to Oracle. Here is how you can connect Excel to Oracle and load Oracle data into Excel in a few simple steps.
To start linking Excel to Oracle, on the ribbon click the DEVART tab and then click the Get Data button. This displays the Import Data wizard, where you create the Excel Oracle connection and configure the query for getting data from Oracle into Excel:
1. Specify Connection Parameters
To connect Excel to an Oracle database, you need to enter the necessary connection parameters in the Connection Editor dialog box. Two connection modes can be used in Excel Add-in for Oracle connections: the Direct connection mode allows connecting Oracle to Excel without any additional software, while the OCI connection mode requires Oracle Client to be installed. The required connection parameters differ between the two modes.
Direct Connection Mode
The following parameters are used for connecting Excel to an Oracle database in the Direct connection mode:
- Host – the DNS name or IP address of the Oracle server to connect to. It can also accept a TNS descriptor; you may also specify a secure protocol to use and, optionally, a port after a colon.
- SID – the unique name for an Oracle database instance.
- Port – the number of a port to communicate with listener on the server. The default value is 1521.
- User Id – your Oracle user name.
- Password – your Oracle password.
- Database – the name of the SQL database to connect to Excel.
- Connect as – allows opening a session with administrative privileges.
Direct mode also supports secure SSH and SSL connections. To enable use of SSH or SSL, you need to add the corresponding prefix to the Host parameter – ssh:// for the SSH protocol and tcps:// for SSL. Then you need to specify connection string parameters for the corresponding protocol in the Advanced parameters.
OCI Connection Mode
To use the OCI connection mode for connection to an Oracle database, you should have Oracle Client software installed on your PC. Clear the Direct check box to work with Oracle Client.
In this mode the SID and Port settings are not used; you specify the Oracle Home to use instead. Also, in the Client mode, the Host parameter must specify the TNS alias of the Oracle database to connect to, instead of the IP address or DNS name of the server. Specify the Oracle Client you want to use in the Home connection option.
Advanced Connection Parameters
If you need to configure your Excel Oracle connector in more detail, you can optionally click the Advanced button and configure advanced connection parameters. There you can configure secure SSH and SSL connections for the Direct mode, fixed char data type trimming, Oracle proxy authentication (for the OCI mode only), Unicode settings, etc.
To check whether you have connected Excel to Oracle correctly, click the Test Connection button.
2. Select whether to Store Connection in Excel Workbook
You may optionally change the settings for how the connection and query data are stored in the Excel workbook and in the Excel settings:
- Allow saving add-in specific data in Excel worksheet – clear this check box in case you don’t want to save any Excel add-in specific data in the Excel worksheet – connections, queries, etc. In this case, if you want to reload data from Oracle to Excel or save modified data back to Oracle, you will need to reenter both the connection settings and query.
- Allow saving connection string in Excel worksheet – clear this check box if you do not want your Oracle connection parameters to be stored in the Excel workbook. In this case you will need to re-enter your connection settings each time you want to reload Oracle data or modify and save them to Oracle. However, you may share the Excel workbook, and nobody will be able to get any connection details from it.
- Allow saving password – it is recommended to clear this check box. If you don’t, all the connection settings, including your Oracle password, will be stored in the Excel workbook, and anyone who has our Excel Add-in for Oracle and the workbook will be able to link Excel to Oracle, get data from it, and modify it. In that case, however, you won’t need to re-enter anything when reloading data from Oracle to Excel or saving it back to Oracle.
- Allow reuse connection in Excel – select this check box if you want to save this connection on your computer and reuse it in other Excel workbooks. It does not affect saving connection parameters in the workbook itself. You need to specify the connection name, after which you will be able to simply select this connection from the list.
3. Configure Query to Get Data
To import data from Oracle to Excel, you may either use Visual Query Builder to configure a query visually, or switch to the SQL Query tab and type the SQL Query. To configure query visually, do the following:
In the Object list select the Oracle table to load its data to Excel.
In the tree below clear check boxes for the columns you don’t want to import data from.
Optionally expand the relation node and select check boxes for the columns from the tables referenced by the current table’s foreign keys to add them to the query.
In the box on the right you may optionally configure the filter conditions and ordering of the imported data and specify the max number of rows to load from Oracle to Excel. For more information on configuring the query you may refer to our documentation, installed with the Excel Add-ins.
After specifying the query, you may optionally click Next and preview some of the first returned rows. Or click Finish and start data loading.
After the data is loaded from Oracle into the Excel spreadsheet, you can work with it as with a usual Excel worksheet. You can instantly refresh data from Oracle by clicking Refresh on the Devart tab of the ribbon and thus always have fresh, live Oracle data in your workbook.
If you want to edit Oracle data in Excel and save the changes back to the Oracle database, you need to click Edit Mode on the Devart tab of the ribbon first. Otherwise, the changes you make cannot be saved to Oracle.
After you start the Edit mode, you can edit the data as you usually do in Excel – delete rows, modify cell values. Columns that cannot be edited in Oracle are shown in italic font, and you cannot edit values in these columns. To add a new row, enter the required values in the last row of the table, which is highlighted in green.
To apply the changes to the actual data in the database, click the Commit button, or click Rollback to roll back all the changes. Please note that the changes are not saved to the database until you click Commit, even if you save the workbook.
Once you have created a connection to an Oracle database, you can select data and load it into a Qlik Sense app or a QlikView document. In Qlik Sense, you load data through the Add data dialog or the Data load editor. In QlikView, you load data through the Edit Script dialog.
The Oracle Connector supports Direct Discovery. The SELECT statement can be edited in the Data load editor and the Edit Script dialog to create a DIRECT QUERY statement.
Qlik Sense: Oracle database properties
| Properties | Description |
|---|---|
| Owner | Shows the owners of tables in the database. |
| Tables | |
| Data preview | |
| Metadata | Shows a table of the fields and whether they are primary keys. Primary-key fields are also labeled with a key icon beside the field name. |
| Fields | |
| Filter data | |
| Filter fields | Displays a field where you can filter on field names |
| Hide script / Preview script | Shows or hides the load script that is automatically generated when table and field selections are made. |
| Include LOAD statement | |
| Insert script | Inserts the script displayed in the Data preview panel into the script editor. This option is available only when you use the Data load editor . |
QlikView: Oracle database properties
| Properties | Description |
|---|---|
| Owner | Shows the owners of tables in the database. |
| Objects | |
| Script | Shows the script for the current selection. |
| Preview | |
| Preceding load | |
Our enterprise plans to move our on-premises databases to the cloud. What’s the best method to migrate data from Oracle Database into an AWS EC2 instance?
Because few enterprises perform public cloud deployments from scratch, chances are high they have Oracle Database running on premises. And moving those on-premises databases to the cloud has its benefits. But an Oracle Database migration to AWS depends on a variety of factors.
Before undergoing an Oracle Database migration, IT teams must consider the size of the database, available network bandwidth between the local data center and AWS and which software edition is running in the data center. Business needs, such as the amount of time available to perform the move, are also important factors.
An Oracle Database migration could be accomplished in one step, but this requires a complete shutdown of the local database in order to extract and migrate the data to the new database in AWS. The process can take anywhere from one to three days, so this can be the most disruptive migration strategy. Single-step database migrations are generally preferred by small businesses with limited database sizes that can tolerate prolonged database downtime during the migration.
Two-step migration strategies are common. The first step produces a point-in-time copy of the existing database, which can be moved to AWS without imposing any downtime on the local database. The local database continues to run during this process, so the actual migration can take as long as necessary — there is almost no tangible disruption.
After the initial Oracle Database migration, a second step will capture, migrate and synchronize any incremental changes to the database. Once completed, the local Oracle Database will need to be shut down while the final changes are captured and migrated. This incremental step is considerably less involved than the initial synchronization, so the downtime is shorter. Once the final synchronization is complete, the AWS deployment takes over and the local database is decommissioned.
A third option promises zero downtime. This typically starts with an initial synchronization and then invokes a form of continuous data replication (CDR) to perform complete synchronization of the local and AWS database versions. A variety of tools, including Oracle’s GoldenGate or third-party tools like Dbvisit Replicate and Attunity Replicate, can handle CDR. A business can make the switch to AWS once replication has synchronized the local and AWS databases. The CDR tool will continue to keep the instances synchronized. This option is typically reserved for the largest or most active Oracle database users who cannot tolerate any downtime. However, there is an added cost to use CDR, and continuous replication can potentially affect database or network performance.
To access your data stored in an Oracle database, you need to know the server and database name that you want to connect to, and you must have access credentials. Once you have created a connection to an Oracle database, you can select data from the available tables and then load that data into your app or document.
In Qlik Sense, you connect to an Oracle database through the Add data dialog or the Data load editor.
In QlikView, you connect to an Oracle database through the Edit Script dialog.
Setting up the database properties
The Oracle Connector requires additional configuration for the TNS service. The following two TNS files must be modified for the Oracle environment. Sample versions of these files are available in the sample directory at the end of the path shown below for each of the Qlik Sense and QlikView locations, or you can take the files from your Oracle configuration.
- tnsnames.ora: set the host addresses and ports for the TNS service.
- sqlnet.ora: set the location of the Oracle wallet.
These files must be placed in one of the following locations for Qlik Sense and QlikView .
Qlik Sense Desktop:
%USERPROFILE%\AppData\Local\Programs\Common Files\Qlik\Custom Data\QvOdbcConnectorPackage\oracle\lib\network\admin
Qlik Sense Enterprise:
C:\Program Files\Common Files\Qlik\Custom Data\QvOdbcConnectorPackage\oracle\lib\network\admin
QlikView:
C:\Program Files\Common Files\QlikTech\Custom Data\QvODBCConnectorPackage\oracle\lib\network\admin
| Database property | Description | Required |
|---|---|---|
| Host name | Host name to identify the location of the Oracle database | yes unless using TNS Service Name |
| Port | Server port for the Oracle database | yes unless using TNS Service Name |
| Service name | The alias name for the TNS service | yes unless using TNS Service Name |
| Use TNS Service name | Use the TNS name defined in the tnsnames.ora file instead of the TNS alias name. | no |
| TNS Name | The TNS name defined in the tnsnames.ora file. | no |
Authenticating the driver
Qlik Sense: Oracle authentication properties
| Authentication property | Description |
|---|---|
| Username | User name for the Oracle connection |
| Password | Password for the Oracle connection |
QlikView: Oracle authentication properties
| Authentication property | Description |
|---|---|
| Username | User name for the Oracle connection |
| Password | Password for the Oracle connection |
Using Advanced options
Question: I need to offload a huge Oracle database onto flat files and export is too slow. I also find the import utility (imp and impdp) too slow. What are other options for fast data unloading and reloading from Oracle?
Answer: There are many tools, many vendors, and many techniques. I recommend the book “Oracle Utilities” for tips on improving Oracle unload/reload performance and my book “Oracle Tuning: The Definitive Reference” for specific tuning tips for high-I/O operations. Also see:
- Data Pump – Here are my tips for speeding up Oracle expdp (Data Pump). You can also tune the import utility (impdp) for faster performance.
- Solid-state disk – SSD is hundreds of times faster than disk, and it’s perfect for super-fast unload/reload migrations.
- SQL Loader – You can make SQL*Loader run very fast with tuning tips.
- Database to database – Here is a clever technique using Linux with direct export/import, with no intermediate flat files.
- CTAS – You can use a parallelized “create table as select” over a high-speed database link to quickly move tables between Oracle databases (see the sketch after this list).
- Fact & CoSort – Fast Extract (FACT) for Oracle unloads large tables in parallel to flat files. FACT also writes the file layout metadata that CoSort can use for reload (reorg, ETL) pre-sorts, plus join and aggregate transforms, report generation, field-level security, etc. For more information about these tools click here.
- Vendor tools – Vendors such as BMC and Wisdomforce offer tools for super-fast table unloading from Oracle. Also, unload/reloads happen super-fast on SSD hardware.
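To make the CTAS option concrete, here is a hedged sketch of a parallel pull over a database link; the table name, link name and degree of parallelism are placeholders to adapt to your environment:

```sql
-- Copy a remote table over a database link with parallel query;
-- NOLOGGING skips most redo generation for the new table.
CREATE TABLE big_table_copy
  PARALLEL (DEGREE 8)
  NOLOGGING
AS
SELECT /*+ PARALLEL(t, 8) */ *
FROM   big_table@source_db_link t;
```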
We can read data from an IMS DB through an application program, then via SFTP we can convert that data into a flat file and load it into Oracle.
September 1, 2008 at 7:06 am
I wondered if I could get some advice from people. I need to extract data from an Oracle 8i database which sits on a Unix server, and load it onto a new database on SQL Server 2005. I am thinking my best option is to create a linked server, then create an SSIS package to import data into the new database. The problem is I have no idea how to create a linked server to a Unix platform. I have linked servers on SQL 2000 that use databases on Windows platforms, but none on SQL 2005 using Unix. Does anyone know how I go about this?
Secondly, if anyone can tell me a different and better way of achieving my goal, I’d like to hear about that too. The only other option I am aware of is to go to the Oracle side and extract the data using SQL*Plus into flat files. I think that would take a long time to format, etc.
So any help will be much appreciated!
September 1, 2008 at 7:20 am
Forgot to say, I know there are a lot of posts around this topic in this forum, and I don’t wish for people to repeat everything, but I couldn’t find anything specific to Unix, so felt the need to start a new post. Hope no-one minds!
September 1, 2008 at 5:11 pm
Same topic is discussed here
Regarding Unix, there is no difference in procedure whether the Oracle instance is on Unix or Windows. You are dealing with the Oracle database only, not the OS.
September 2, 2008 at 7:40 am
I try to avoid loading SSIS whenever it’s practical. If all you want to do is extract data from one table/view into a SQL Server table, then my suggestion would be to use the OpenQuery method. Example:

```sql
SELECT *
FROM OPENQUERY(LinkedServerName, 'select col1, col2 from oracleTableOrView');
```

I’ve found that to be the most effective route, but this would also work (four-part naming through the linked server; the schema and object names here are placeholders):

```sql
SELECT COL1, COL2
FROM LinkedServerName..SCHEMA_NAME.ORACLE_TABLE_OR_VIEW;
```

Please note that the syntax above is case sensitive on the Oracle side of the house.
This flow extracts data from a web service and loads it into a relational database.
Step 1. Create a source HTTP connection.
Step 2. Create a destination connection for the relational database. Test it.
Step 3. Create a format for the payload returned by web service, most likely JSON or XML.
Step 4. Link connection and format and test the response for the web service in the Explorer.
Step 5. Most likely the response is nested JSON or XML, so learn how to flatten the nested dataset.
Step 6. Create a new web service to database flow.
Step 7. Add a new transformation and select the from connection, format, and endpoint, and the to connection and table.
Step 8. Click the MAPPING button and configure the “flattening” for the nested JSON or XML. Most likely it is going to be a Source SQL.
Step 9. Test the transformation. Make sure it returns the flat data set.
Step 10. Add per-field mapping if needed.
Step 11. Configure MERGE (UPSERT) if needed.
Step 12. Save the flow and execute it manually.
Step 13. Schedule the flow to be executed periodically.
I think the “Hello World” of data engineering is making a one-to-one copy of a table from the source to the target database by bulk-loading data. The fastest way to achieve this is to export a table into a CSV file from the source database and import the CSV file into a table in the target database. With any database, importing data from a flat file is faster than using insert or update statements.
To connect to an ODBC data source with Python, you first need to install the pyodbc module. Obviously, you also need to install and configure the ODBC driver for the database you are trying to connect to.
Let’s load the required modules for this exercise. The code here works for both Python 2.7 and 3.
Exporting table to CSV
The function below takes a select query, a file path for the exported file, and connection details. The best practice is to turn on autocommit; some ODBC drivers will give you an error if this parameter is not set.
```python
import sys

import pandas as pd
import pyodbc


def table_to_csv(sql, file_path, dsn, uid, pwd):
    '''Create a CSV file from the query result with an ODBC driver.'''
    try:
        cnxn = pyodbc.connect('DSN={};UID={};PWD={}'.format(dsn, uid, pwd),
                              autocommit=True)
        print('Connected to {}'.format(dsn))
        # Get data into a pandas dataframe
        df = pd.read_sql(sql, cnxn)
        # Write to csv file
        df.to_csv(file_path, encoding='utf-8', header=True,
                  doublequote=True, sep=',', index=False)
        print('CSV file has been created')
        cnxn.close()
    except Exception as e:
        print('Error: {}'.format(str(e)))
        sys.exit(1)

# Example: table_to_csv('SELECT * FROM city', 'city.csv', 'MySQL_ODBC', 'user', 'pwd')
```
The execution example exports the city table from MySQL to a CSV file, with ODBC set up using MySQL_ODBC as the DSN (see the commented example call above).
Load Table from CSV
The function below takes a CSV-load query and connection details to import the CSV into a table. Autocommit should be turned on. The local_infile parameter enables MySQL’s LOAD DATA INFILE command; it may not be relevant for other databases, but if the parameter is not relevant for the specific database connection, it will simply be ignored, so you can keep it in place.
```python
def load_csv(load_sql, dsn, uid, pwd):
    '''Load a table from a csv file according to the load SQL
    statement, through ODBC.'''
    try:
        # Keyword arguments that pyodbc does not recognize (such as
        # local_infile) are appended to the ODBC connection string.
        cnxn = pyodbc.connect('DSN={};UID={};PWD={}'.format(dsn, uid, pwd),
                              autocommit=True, local_infile=1)
        print('Connected to {}'.format(dsn))
        cursor = cnxn.cursor()
        # Execute SQL load statement
        cursor.execute(load_sql)
        print('Loading table completed successfully.')
        cnxn.close()
    except Exception as e:
        print('Error: {}'.format(str(e)))
        sys.exit(1)
```
Let’s load the data exported with the first function into both MySQL and PostgreSQL databases. Each database has its own SQL syntax for this, and you pass the appropriate statement to the function: MySQL uses the LOAD DATA INFILE command, while Postgres uses the COPY command, as sketched below.
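For illustration, the statements you would pass to load_csv as load_sql might look like the following; the file path, table name and CSV layout are placeholders matching the export function above:

```sql
-- MySQL: bulk-load the exported CSV (requires the local_infile setting).
LOAD DATA LOCAL INFILE '/tmp/city.csv'
INTO TABLE city
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- PostgreSQL: the equivalent bulk load with COPY. A server-side COPY
-- reads the file on the database host; use psql's \copy for a client-side file.
COPY city FROM '/tmp/city.csv' WITH (FORMAT csv, HEADER true);
```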
The world as seen by a fish… another blog by Azahar Machwe
Getting your data from its source (where it is generated) to its destination (where it is used) can be very challenging, especially when you have performance and data-size constraints (how is that for a general statement?).
The standard Extract-Transform-Load sequence explains what is involved in any such Source -> Destination data-transfer at a high level.
We have a data-source (a file, a database, a black-box web-service) from which we need to ‘extract’ data, then we need to ‘transform’ it from source format to destination format (filtering, mapping etc.) and finally ‘load’ it into the destination (a file, a database, a black-box web-service).
In many situations, using a commercial third-party data-load tool or a data-loading component integrated with the destination (e.g. SQL*Loader) is not a viable option. This scenario can be further complicated if the data-load task itself is a big one (say upwards of 500 million records within 24 hrs.).
One example of the above situation is when loading data into a software product using a ‘data loader’ specific to it. Such ‘customized’ data-loaders allow the decoupling of the products’ internal data schema (i.e. the ‘Transform’ and ‘Load’ steps) from the source format (i.e. the ‘Extract’ step).
The source format can then remain fixed (a good thing for the customers/end users) and the internal data schema can be changed down the line (a good thing for product developers/designers), simply by modifying the custom data-loader sitting between the product and the data source.
In this post I will describe some of the issues one can face while designing such a data-loader in Java (1.6 and upwards) for an Oracle (11g R2 and upwards) destination. This is not a comprehensive post on efficient Java or Oracle optimization. This post is based on real-world experience designing and developing such components. I am also going to assume that you have a decent ‘server’ spec’d to run a large Oracle database.
Preparing the Destination
We prepare the Oracle destination by making sure our database is fully optimized to handle large data-sizes. Below are some of the things that you can do at database creation time:
– Make sure you use BIGFILE table-spaces for large databases. BIGFILE table-spaces provide efficient storage for large databases.
– Make sure you have large enough data-files for TEMP and SYSTEM table-space.
– Make sure the constraints, indexes and primary keys are defined properly as these can have a major impact on performance.
For further information on Oracle database optimization at creation time you can use Google (yes! Google is our friend!).
Working with Java and Using JDBC
This is the first step to welcoming the data into your system. We need to extract the data from the source using Java, transform it, and then use JDBC to inject it into Oracle (using the product-specific schema).
There are two separate interfaces for the Java component here:
1) Between Data Source and the Java Code
2) Between the Java Code and the Data Destination (Oracle)
Between Data Source and Java Code
Let us use a CSV (comma-separated values) format data-file as the data-source. This will add a bit of variety to the example.
Using the BufferedReader (java.io) one can easily read gigabyte-size files line by line. This works best if each line in the CSV contains one data row, so we can read, process and discard each line in turn. Not having to store more than a line at a time in memory allows your application to keep a small memory footprint.
Between the Java Code and the Destination
The second interface is where things get really interesting: making Java work efficiently with Oracle via JDBC. The most important feature when inserting data into the database, one you cannot do without, is the batched prepared statement. Using prepared statements (PSs) without batching is like taking two steps forward and ten steps back; in fact, using PSs without batching can be worse than using normal statements. Therefore always use PSs, batch them together and execute them as a batch (using the executeBatch method).
A point about the Oracle JDBC drivers: make sure the batch size is reasonable (i.e. less than 10K). With certain versions of the Oracle JDBC driver, if you create a very large batch, the batched insert can fail silently while you are left feeling pleased that you just loaded a large chunk of data in a flash. You will discover the problem only if you check the row count in the database after the load.
If the data-load involves sequential updates (i.e. a mix of inserts, updates and deletes), batching can still be used without destroying data integrity. Create separate batches for the insert, update and delete prepared statements and execute them in the following order:
- Insert batches
- Update batches
- Delete batches
A final consideration is what happens to the constraints and primary keys (the “CoPs”) during the load. Think of the data as cargo arriving by truck, with the CoPs as gatekeepers at the destination:
- Obviously the quickest option, in terms of our data-load, is to drive the truck through the gates (CoPs disabled) and dump the cargo (data) at the destination, without stopping for a check at the gate or after unloading (CoPs enabled for future changes but existing data not validated). This is only possible if the contract with the data-source provider puts the full responsibility for data accuracy with the source.
- The slowest option will be if the truck is stopped at the gates (CoPs enabled), unloaded and each cargo item examined by the gatekeepers (all the inserts checked for CoPs violations) before being allowed inside the destination.
- A compromise between the two (i.e. the middle path) would be to allow the truck to drive into the destination (CoPs disabled), unload it, and check the cargo at the time of transferring it to the destination (CoPs enabled after the load and existing data validated).
One thought on “ Efficient Data Load using Java with Oracle ”
Good summation of ETL (IMHO) – thank you.
Do you have any thoughts on XSLT for the transform step?
This article demonstrates how to import data into an Autonomous Data Warehouse (ADW) or Autonomous Transaction Processing (ATP) service on the Oracle Cloud using the impdp utility.
The examples in this article are based on the Autonomous Data Warehouse (ADW), but the same method works fine for the Autonomous Transaction Processing (ATP) service too.
Export Your Existing Data
We have a schema called TEST in an Oracle 18c instance on Oracle Database Cloud Service (DBaaS). The schema has two tables (EMP and DEPT), which we want to transfer to the Autonomous Data Warehouse (ADW) or Autonomous Transaction Processing (ATP).
Create a directory object.
Export the schema. The ADW documentation suggests the EXCLUDE and DATA_OPTIONS options shown in the example below; these options are not necessary for the ATP service. For such a small export it is silly to use the PARALLEL clause, but we want to produce multiple dump files.
This resulted in the following dump files.
These dump files were uploaded to an AWS S3 bucket.
I found I had to set the version=12.2 option during the export, or I would receive the following error during the import into ADW and ATP.
It would appear there is something interesting about the version of 18c used for ADW.
Object Store Credentials
We need to create a credential containing a username and password for the connection to the object store. If you are using an AWS S3 bucket, the username and password are as follows.
- username : AWS access key
- password : AWS secret access key
The credentials are dropped and created using the DROP_CREDENTIAL and CREATE_CREDENTIAL procedures of the DBMS_CLOUD package respectively.
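A minimal sketch of the create call looks like this; the credential name is a placeholder, and your real AWS keys are substituted in:

```sql
BEGIN
  -- Drop a previous credential of the same name first, if one exists:
  -- DBMS_CLOUD.DROP_CREDENTIAL(credential_name => 'OBJ_STORE_CRED');
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'my_aws_access_key',
    password        => 'my_aws_secret_access_key');
END;
/
```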
Import Data from S3
For the import to work you will have to make a connection from an Oracle client to the ADW database. You can see the necessary setup to do this here.
From an 18c client we can issue the following type of import. The CREDENTIAL option specifies the object store credential to be used for the import. The DUMPFILE option specifies the URIs of the dump files in the object store. The TRANSFORM and EXCLUDE options are the recommended settings in the ADW documentation, but are not necessary for the ATP service. In this case we are using REMAP_SCHEMA to place the objects into a schema called MY_USER on ADW.
From a 12.2 or earlier client we need to set the default credential for the database on ADW.
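A sketch of that setting, reusing the placeholder credential created above:

```sql
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.OBJ_STORE_CRED';
```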
We can issue the following type of import. The CREDENTIAL option isn’t used and instead the “default_credential:” prefix is used before each object store URI. The rest of the parameters are the same as the previous example.
I ran both these imports from the 18c client on my DBaaS service, but remember, it is initiating the import process on the ADW database.
Get the Log File
If we want to read the contents of the impdp log file, we can push it across to the object store using the PUT_OBJECT procedure.
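A sketch of such a call, assuming the log was written to DATA_PUMP_DIR and reusing the placeholder credential with a hypothetical bucket URI:

```sql
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://s3-us-west-2.amazonaws.com/my-bucket/import.log',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'import.log');
END;
/
```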
It can then be downloaded from the object store.
Autonomous Transaction Processing (ATP)
The method for importing data from a dump file in an object store is the same for the Autonomous Transaction Processing (ATP) service as the Autonomous Data Warehouse (ADW) service.
Remember you are able to create a variety of access structures in ATP that you can’t in ADW.
Writing our experiences with AWS, Oracle, PostgreSQL
Oracle GoldenGate supports Oracle to PostgreSQL migrations by supporting PostgreSQL as a target database, though the reverse migration, i.e. PostgreSQL to Oracle, is not supported. One of the key aspects of these database migrations is the initial data load phase, where full table data has to be copied to the target datastore. This can be a time-consuming activity, with the time taken to load varying based on the table sizes. Oracle suggests using multiple GoldenGate processes to improve the data load performance, or using native database utilities to perform faster bulk loads.
To use a database bulk-load utility, you use an initial-load Extract to extract source records from the source tables and write them to an extract file in external ASCII format. The file can be read by Oracle’s SQL*Loader, Microsoft’s BCP, DTS, or SQL Server Integration Services (SSIS) utility, or IBM’s Load Utility ( LOADUTIL ).
GoldenGate for PostgreSQL doesn’t provide native file-loader support like bcp for MS SQL and SQL*Loader for Oracle. As an alternative, we can use the FORMATASCII option to write data into CSV files (or with any custom delimiter) and then load them using the PostgreSQL COPY command. This approach is not automated, and you will have to ensure that all files are loaded into the target database.
In this post, we will evaluate the two approaches, i.e. using multiple Replicat processes and using ASCII dump files with the PostgreSQL COPY command, to load data and compare their performance. The diagram below shows both approaches.
To compare the scenarios, I created a test table with 200M rows (12GB) and used an RDS PostgreSQL instance (db.r3.4xlarge with 10k PIOPS).
Approach 1 : Using Oracle Goldengate multiple replicat processes to load data
In this approach, I used multiple Oracle GoldenGate Replicat processes (8) with the @RANGE filter to load data into PostgreSQL.
We were able to get 5k inserts/sec per thread and load the table in ~88 mins with 8 Replicat processes.
One key point to remember: if you are working with EC2 and RDS databases, the EC2 machine hosting the trail files and the RDS instance should be in the same AZ. During testing, we noticed that the insert rate dropped drastically (~800 inserts per sec) when using cross-AZ writes. Below is the Replicat parameter file used for performing the data load.
You will need to create additional Replicat parameter files by changing the range clause, e.g. FILTER (@RANGE (2,8)), FILTER (@RANGE (3,8)), etc.
Approach 2: Data load using PostgreSQL copy command
In the second approach, we used a parameter file with the FORMATASCII option (refer to the snippet below) to create a GoldenGate Extract process that dumped the data with a ‘|’ delimiter, and then used the PostgreSQL COPY command to load the data from these dump files.
Extract Parameter file
With the above parameter file, the GoldenGate Extract process sends data to the remote system and stores it in dump files. These files are then loaded into PostgreSQL using the COPY command.
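As a sketch, one such load from psql might look like this; the file and table names are placeholders, and psql’s client-side \copy is used because an RDS instance exposes no file system for a server-side COPY:

```sql
\copy target_table FROM '/ggdumps/ext_customer_000001.dat' WITH (FORMAT csv, DELIMITER '|')
```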
The data load took 21 mins, which is nearly 4x faster than the initial approach. Removing the primary key index drops the time taken even further.
Ever thought of creating a web service your app can consume in no time? Then look no further, as Spring Boot offers an easy way to create stand-alone, production-grade Spring-based applications in no time. The setup is pretty straightforward, often needing little time to get up and running.
Spring integrates a wide range of different modules under its umbrella, such as spring-core, spring-data, spring-web-mvc, etc.
Spring Web MVC leverages the power of dependency injection to develop loosely coupled applications which, when done properly, can be effective and efficient.
In this article, I’ll show you how to create a RESTful web service with Spring Boot and Spring Data JPA, with the data persisted in a local Oracle database. We’ll set up the project using the following:
· Spring boot starter web
· Spring boot starter data JPA
· Oracle database 11g express
· Oracle JDBC driver ojdbc.jar
1. Download Oracle Setup
First, download the Oracle 11g Express database from oracle.com and install it on your local machine. You need to register for an Oracle account, if you don’t already have one, before you can download the database. Extract the zip file and run the setup file. Follow the installation steps and be sure to remember your password, as you’ll need it to configure the data source. The default username is “system”.
2. Create a New Project in Intellij IDEA
Create a new project using Spring Initializr from IntelliJ and add the following dependencies:
spring web and spring data jpa.
I assume you’ve installed maven, if not go here to set up maven.
3. Install Oracle Driver
The ojdbc.jar provides the necessary driver and setup for the Oracle database.
To add the ojdbc driver to your local repository, download the ojdbc7.jar file from oracle.com.
Copy the jar file to any folder, preferably in your C: folder.
Run the following maven command:
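A typical invocation looks like this; the groupId, artifactId and version are placeholders, so match them to the jar you downloaded and to the dependency you add to pom.xml:

```
mvn install:install-file -Dfile=C:/ojdbc7.jar -DgroupId=com.oracle \
    -DartifactId=ojdbc7 -Dversion=12.1.0.1 -Dpackaging=jar
```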
For Windows users, replace the forward slash with a backslash.
Add the following dependency to the pom.xml file:
Your project’s pom.xml should look like this:
4. Create The Project Structure
Since we are using the MVC architectural pattern, which separates the application into model, view and controller, we need to create different packages for controllers, entities (or models), DAOs (Data Access Objects), and services. The project structure looks like this:
5. Configure Oracle Datasource on IntelliJ IDEA
On the database tool window, add a new Oracle datasource. Download the missing drivers if you’ve not done so.
Input the username (the default is “system”) and password for your local Oracle DB and test the connection.
It should be successful, all things being equal. Click OK!
6. Add Application Properties
The application.properties file lets Spring know which database configuration and profile to use at runtime.
Navigate to your src/main/resources and add the following to the application.properties file.
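A minimal sketch of those properties for a default local XE install; the URL, credentials and DDL settings are placeholders to adjust:

```properties
# Hypothetical local XE data source; adjust host, SID and credentials.
spring.datasource.url=jdbc:oracle:thin:@localhost:1521:XE
spring.datasource.username=system
spring.datasource.password=yourpassword
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
# Let JPA create the schema on startup and log the generated SQL.
spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=true
```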
7. Create the User Model
Next, we’ll create a User class annotated with the JPA @Entity annotation. This class creates a model which JPA uses to establish a relationship/mapping between the User entity and a table in the Oracle relational DB.
Navigate to the entity package and create a new class called User and add the following:
Next, we create a data access object, which is an interface that extends the JpaRepository interface. The DAO is used to perform CRUD database operations.
Navigate to the dao package and create an interface called UserDao and add the following code:
The service class contains all the methods that handle the business logic of the application. Navigate to the service package and create a new Java class called UserService. For brevity’s sake, I’ll only create a service to add a new user and retrieve all users.
Next, we create the User Controller which holds all the REST endpoints. Navigate to the controller package and create a new class called UserController and add the following:
Now the application is ready. Open the terminal in IntelliJ and run the following Maven command:
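Assuming the standard Spring Boot Maven plugin that Spring Initializr configures, the command is typically:

```
mvn spring-boot:run
```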
This creates the User table in the database using the JPA annotations and the ojdbc configuration in the application.properties file. In the terminal logs, you should also see the SQL syntax showing how the tables were created.
We’re going to test the REST endpoints from the UserController using Postman.
On Postman, you can perform a POST request to create a new user. Here we POST a User object that has a name and salary. Here’s a screenshot of the POST request and the corresponding JSON result:
Also, you can get all the users persisted in the database by performing a GET request on the /user/all endpoint. The sample GET request is shown below:
Voila! That’s basically how you can set up a web service using Spring Boot. Thanks for reading.
Don’t forget to drop your questions, contributions, and thoughts in the comment section below.
You can clone the project on github
Ikhiloya/spring-jpa-oracle-demo
spring-jpa-oracle-demo – A sample demo app that shows how to create a RESTful webservice using Spring Boot, Spring Data…
github.com
Useful Resources
NOTE : You can contribute your ideas to this project for Spring boot beginners.
Ideas to consider:
1. spring boot with mysql or any other database
2. spring boot with kotlin
3. spring security
4. JUnit Test
5. Integration Test
6. Your own idea would be great…just do it!
I have an Oracle PL/SQL procedure that extracts data into an Excel worksheet. I am wondering if there is any way, from the procedure, to also set the macros of the Excel document that is created?
For example, usually the macros are manually done:
Go to Tools, Macros – Macro. Select the “Automate” macro and click “run”.
However I am trying to automate the manual process. Any suggestions or direction?
Setting a macro would not be possible from PL/SQL.
Could you please tell us what that macro is going to do?
One suggestion would be to connect to the Oracle database from Excel using a macro and fetch the data into the Excel sheet.
This Oracle Database 11g: Data Warehousing Fundamentals training will teach you about the basic concepts of a data warehouse. Explore the issues involved in planning, designing, building, populating and maintaining a successful data warehouse.
- Define the terminology and explain basic concepts of data warehousing.
- Identify the technology and some of the tools from Oracle to implement a successful data warehouse.
- Describe methods and tools for extracting, transforming and loading data.
- Identify some of the tools for accessing and analyzing warehouse data.
- Describe the benefits of partitioning, parallel operations, materialized views and query rewrite in a data warehouse.
- Explain the implementation and organizational issues surrounding a data warehouse project.
- Improve performance or manageability in a data warehouse using various Oracle Database features.
Oracle’s Database Partitioning Architecture
You’ll also explore the basics of Oracle’s Database partitioning architecture, identifying the benefits of partitioning. Review the benefits of parallel operations to reduce response time for data-intensive operations. Learn how to extract, transform and load data (ETL) into an Oracle database warehouse.
Improve Data Warehouse Performance
Learn the benefits of using Oracle’s materialized views to improve the data warehouse performance. Instructors will give a high-level overview of how query rewrites can improve a query’s performance. Explore OLAP and Data Mining and identify some data warehouse implementations considerations.
Use Data Warehousing Tools
During this training, you’ll briefly use some of the available data warehousing tools. These tools include Oracle Warehouse Builder, Analytic Workspace Manager and Oracle Application Express.
Dani Schnider’s Blog
If you work with Data Vault for a data warehouse running in an Oracle database, I strongly recommend using Oracle 12.2 or higher. Why? Since Oracle 12c Release 2, join elimination works for more than one join column, and this is essential for queries on a Data Vault schema.
I often have discussions with customers or training attendees that have concerns about query performance on Data Vault schemas. Data Vault is a suitable data modeling method for integration and historization of data from different source systems in a data warehouse. But because a Data Vault schema typically contains a high number of tables, a lot of joins are required to select data from all the Hubs, Links and Satellites that are involved in each query. A little-known performance feature introduced with Oracle 12.2 helps to improve the query performance.
In a DOAG presentation this year, we showed several ways data can be loaded from a Data Vault schema into a star schema. In one live demo, I explained the purpose of join elimination in this context. Since Oracle 12.2, multi-column join elimination is supported, as Jonathan Lewis wrote three years ago in his blog post Join Elimination 12.2. In this post I want to explain in detail the benefit of this feature for Data Vault.
In Data Vault, we typically have multiple Satellite tables attached to a Hub table. A separate Satellite may be created for each source system, for different change frequencies, for additional columns due to new requirements and so on. So, it is common to have many Satellites for one Hub. This has advantages for independent load jobs, data model enhancements, etc., but makes it more difficult to extract data from the Data Vault schema. To reduce the complexity of the extraction jobs or ad-hoc queries, a proven approach is to create (or better: generate) a view layer on top of the Data Vault model. For example, we can create two views for each Hub and its corresponding Satellites:
- The Current View returns the current version of all Satellites for each Hub key
- The Version View (or History View) returns the whole history of the data, with a separate version for each validity period
For good query performance, a common approach in Data Vault is to create a “Point in Time table” (PIT table) for each Hub. I already explained in blog post Loading Dimensions from a Data Vault Model how the PIT table and the view layer can be implemented. A similar approach I used for my demo example at the DOAG presentation.
Let’s assume we have a Hub H_CUSTOMER with three Satellite tables S_CUSTOMER_INFO, S_CUSTOMER_ADDRESS and S_BILLING_ADDRESS. The PIT table PIT_CUSTOMER contains the load date for each Satellite version that is valid in a particular validity range. Details are explained here. On top of these tables, we create a Current View and a Version View.
The two views look almost the same, the only difference is that the Current View contains a filter WHERE pit.load_end_date IS NULL to return only the current versions. The Version View does not apply this filter, but on the other hand contains two additional columns VALID_FROM and VALID_TO to describe the validity range. To show the behaviour of join elimination, I will use the Current View, but it works exactly the same with the Version View.
The Current View for our example contains all attributes of all Satellites and is created like this:
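A hedged sketch of such a view follows; the attribute columns and the PIT load-date column names are placeholders, and each Satellite’s primary key is assumed to be the hub key plus the load date:

```sql
CREATE OR REPLACE VIEW v_customer_cur AS
SELECT h.h_customer_key,
       h.customer_id,
       i.customer_name,       -- attributes from S_CUSTOMER_INFO
       i.date_of_birth,
       a.delivery_address,    -- attributes from S_CUSTOMER_ADDRESS
       b.billing_address      -- attributes from S_BILLING_ADDRESS
FROM   h_customer h
JOIN   pit_customer pit
       ON  pit.h_customer_key = h.h_customer_key
       AND pit.load_end_date IS NULL          -- current versions only
LEFT JOIN s_customer_info i
       ON  i.h_customer_key = h.h_customer_key
       AND i.load_date      = pit.s_customer_info_load_date
LEFT JOIN s_customer_address a
       ON  a.h_customer_key = h.h_customer_key
       AND a.load_date      = pit.s_customer_address_load_date
LEFT JOIN s_billing_address b
       ON  b.h_customer_key = h.h_customer_key
       AND b.load_date      = pit.s_billing_address_load_date;
```

This is where multi-column join elimination pays off: if a query selects only the S_CUSTOMER_INFO columns from the view, the 12.2 optimizer can remove the outer joins to the other two Satellites, because both join columns are covered by each Satellite’s primary key.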
First, extract the Oracle data into sequential files.
Second, decide which indexes are needed for the VSAM files.
Next, use IDCAMS to define the new VSAM files and the required indexes.
Then load the extracted data into the new VSAM files.
If this is not what you are looking for, please clarify.
Hi, how do we extract Oracle data into a sequential file? You are correct, but I want to know the utilities, and whether there is any other way to load the data. Could you give me a small example? We need this urgently for Sreenadh to start the work. Thanks, Madhu
Unload the table to a text file and then load it into the mainframe. When I googled your requirement, I got many utilities.

Extract Data from Incoming Emails and Convert It to a Database

If you’re doing business online, you probably maintain a database of your customers, clients, or subscribers. Typically you store the customers’ email addresses, names, order numbers, and purchased products in the database. Plus, you may want to keep the customers’ personal information such as postal addresses, phone numbers, fax numbers and much more in your database. It’s a large volume of data that you need to keep in order – add new customers, remove customers, and update customer information on request.

Have you ever thought how much time you spend on tasks such as extracting data from emails and pasting it into your database manually? It can literally take hours. And as your online business grows, you’ll be receiving more and more emails. One day you may find that extracting form data from email to Excel, MS Access or any other database eats up almost all your working time.

Parse Any Email to a Database with G-Lock Email Processor

So, why not automate your email processing and convert emails to database records in minutes? G-Lock Email Processor will extract data from email forms – the customer’s name, email address, postal address, order ID, the product name they purchased, purchase date, license type, or whatever you tell it to extract – and add it to your database. It can be a local or remote ODBC-compatible database such as MS Access, MySQL, MS SQL, Oracle and others.

It’s easy to get started with G-Lock Email Processor. You just create an email account within the program from which it will read your emails, add a filter to catch the emails you need (for example, purchase orders, completed web forms, unsubscribe requests), and tell the program how to extract data from the email and what to do with the extracted data – write it to the database, update existing records, or delete records based on the extracted data. Once set up, the extraction rule will boil the email down to the database records you are looking for.

For example, each email campaign may generate a number of unsubscribe requests. You don’t want to waste your time on those recipients, but you can’t ignore them either. So you set up G-Lock Email Processor to catch unsubscribe requests by, for example, the word “Unsubscribe” in the Subject, extract the recipient’s email address, check your database for that email address, and remove the record associated with it from the database.

While the program is doing the job for you, you can focus on more important aspects of your business, or just have the rest you deserve. The great thing is that you don’t even need to start G-Lock Email Processor manually. It runs as a service and automatically starts when you turn your computer on. You can even make the program work for you 24 hours a day, 7 days a week and have your work done in a timely and accurate manner without spending a single minute of your working time on it.

G-Lock Email Processor is a flexible data parser and extractor for converting incoming emails into easy-to-handle databases. Try it for free and parse your first emails to the database within minutes!

In preparing my presentation for the Michigan Oak Table Symposium, I came across AWR extract and load.
Extract Data from Incoming Emails and Convert It to a Database

If you are doing business online, you probably maintain a database of your customers, clients, or subscribers. Typically you store the customers' email addresses, names, order numbers, and purchased products in the database. You may also want to keep personal information such as postal addresses, phone numbers, and fax numbers. It is a large volume of data that you need to keep in order: add new customers, remove customers, and update customer information on request.

Have you ever thought about how much time you spend manually extracting data from emails and pasting it into your database? It can literally take hours, and as your online business grows, you will receive more and more emails. One day you may find that extracting form data from email into Excel, MS Access, or any other database eats up almost all of your working time.

Parse Any Email to a Database with G-Lock Email Processor

So why not automate your email processing and convert emails into database records in minutes? G-Lock Email Processor will extract data from email forms (the customer's name, email address, postal address, order ID, the product they purchased, the purchase date, the license type, or whatever you tell it to extract) and add it to your database. It can be a local or remote ODBC-compatible database such as MS Access, MySQL, MS SQL, Oracle, and others.

It is easy to get started with G-Lock Email Processor. You create an email account within the program from which it will read your emails, add a filter to catch the emails you need (for example, purchase orders, completed web forms, or unsubscribe requests), and tell the program how to extract data from the email and what to do with the extracted data: write it to the database, update existing records, or delete records based on the extracted data. Once set up, the extraction rule will boil each email down to the database records you are looking for.

For example, each email campaign may generate a number of unsubscribe requests. You do not want to waste time on those recipients, but you cannot ignore them either. So you set up G-Lock Email Processor to catch unsubscribe requests by, for example, the word "Unsubscribe" in the subject, extract the recipient's email address, check whether that address exists in your database, and remove the associated record. While the program does the job for you, you can focus on more important aspects of your business, or just take the rest you deserve.

The greatest thing is that you do not even need to start G-Lock Email Processor manually. It runs as a service and starts automatically when you turn your computer on. You can have the program work for you 24 hours a day, 7 days a week, and have the work done in a timely and accurate manner without spending a single minute of your own time on it. G-Lock Email Processor is a flexible data parser and extractor for converting incoming emails into easy-to-handle database records. Try it for free and parse your first emails to the database within minutes!
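To make the workflow concrete, here is a rough Python sketch of the unsubscribe rule described above. It is not G-Lock Email Processor's implementation, just the same idea done by hand; the IMAP host, credentials, and database schema are placeholders.

# A rough sketch of the kind of rule such an email processor automates:
# catch "Unsubscribe" messages and remove the sender from the database.
# Host, credentials, and schema are illustrative placeholders.
import imaplib
import sqlite3
from email import message_from_bytes
from email.utils import parseaddr

imap = imaplib.IMAP4_SSL("imap.example.com")      # placeholder host
imap.login("inbox@example.com", "app-password")   # placeholder credentials
imap.select("INBOX")

db = sqlite3.connect("subscribers.db")            # placeholder database

# Find messages whose subject contains the word "Unsubscribe"
_, data = imap.search(None, '(SUBJECT "Unsubscribe")')
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    msg = message_from_bytes(msg_data[0][1])
    sender = parseaddr(msg["From"])[1]            # extract the bare address
    # Remove the matching record, mirroring the rule described above
    db.execute("DELETE FROM subscribers WHERE email = ?", (sender,))

db.commit()
imap.logout()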
In preparing my presentation for the Michigan Oak Table Symposium, I came across AWR extract and load.

While these statements are documented in the Oracle manuals (kind of), I do not see much discussion online, which is a good barometer for the popularity of an item: not much discussion means not very well known or used. The scripts are in $ORACLE_HOME/rdbms/admin in 10.2, but they are not documented there.

One of the frustrations with AWR (and Statspack) has been that diagnosing past performance issues and trending depend on keeping a large number of snapshots online. This means more production storage and more resource consumption from queries against the production system. Would it not be nice to take the AWR data from your production system, load it into a DBA repository, and then do all your querying there? Perhaps even add your own custom ETL to pre-aggregate the data and create custom views and procedures?

As of 10.2, Oracle supplies two scripts that enable you to extract AWR data and load it into another database (even one already running AWR snapshots). You can even take the AWR data from a 10.2 database on Solaris and load it into an 11.2 database on Windows XP (other variations may work, but these are the two versions I have handy). I also took an 11.2 database on Linux and loaded it into the Windows database.

Extracting AWR data is pretty straightforward. Log in as a DBA or an appropriately privileged account and run $ORACLE_HOME/rdbms/admin/awrextr.sql. The Data Pump export took 10 to 20 minutes to extract 7 days of AWR data. The files were less than 50 MB and compressed to less than half that size. FTP (or scp) the file to the DBA repository server and uncompress it. Make certain that the dump file is stored in a directory that is also defined as an Oracle directory.

The AWR load was also fairly straightforward, with one minor wrinkle with the dump file name. The process will prompt for the staging schema name; the default is AWR_STAGE. If you accept the default, the script will create the AWR_STAGE user after asking you for default tablespaces. Once the AWR load process has completed, the script will drop the AWR_STAGE user.

After the process completes, the AWR tables have new data in them! You can query DBA_HIST_SNAPSHOT or any of the other DBA_HIST views (including DBA_HIST_ACTIVE_SESS_HISTORY). Unfortunately, the standard AWR and ASH reports use the current database's DBID, which will not be the same as the DBID of the data you have just loaded. You will need to create your own AWR/ASH scripts or modify the ones provided... but that is for the next blog post!
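In the meantime, a quick way to verify the imported snapshots is a query against DBA_HIST_SNAPSHOT filtered by the source database's DBID. A minimal sketch, assuming the python-oracledb driver and a privileged repository account; the credentials, DSN, and DBID value are placeholders.

# Minimal sketch: list the AWR snapshots just loaded into the repository,
# keyed by the source database's DBID (placeholder value below).
import oracledb

conn = oracledb.connect(user="dba_repo", password="***",  # placeholders
                        dsn="repo-host/repodb")
with conn.cursor() as cur:
    cur.execute("""
        SELECT snap_id, begin_interval_time, end_interval_time
          FROM dba_hist_snapshot
         WHERE dbid = :dbid            -- DBID of the *source* database
         ORDER BY snap_id
    """, dbid=1234567890)
    for snap_id, begin_t, end_t in cur:
        print(snap_id, begin_t, end_t)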
BODS Data Migration ETL Process

The Extract, Transform & Load (ETL) process view is presented below for better understanding. The diagram shows how the ETL is used to load data into the SAP client in stages, using one of the identified interfaces.

Figure 1.0 ETL Process per Object

a. Data Extraction & Cleansing
b. Data Transformation

Figure 2.0 ETL Job Flow

Each data migration (ETL) job consists of the component steps shown in the job flow above.
SQLines Data is an open source, scalable, high-performance data transfer, schema conversion, and validation tool for Oracle to MariaDB migration.

Why SQLines Data

SQLines Data benefits:

Migration Features
You can use the SQLines SQL Converter tool to convert stored procedures, functions, triggers, views, and other objects.

Scalability and High Performance

Designed for DBAs and Enterprise-Class Migrations

Logging and Statistics

SQLines Data in Command Line

You can use the SQLines Data tool from the command line. Just launch sqldata.exe on Windows or ./sqldata on Linux with the specified options. For information on how to set up Oracle and MariaDB connections, see SQLines Data Connection String Formats.
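For example, a transfer of the emp table might be launched as follows. This is a reconstructed invocation: the -t, -sd, and -td options are taken from the explanation below, but the connection-string values are placeholders, so check the SQLines Data Connection String Formats document for the exact syntax.

sqldata -t=emp -sd=oracle,scott/password@ora-host:1521/orcl -td=mariadb,user/password@maria-host,db_name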
The -t option defines the table name, and the -sd and -td options (source and target databases) specify the connection strings for Oracle and MariaDB, respectively. This command transfers the table emp from the Oracle database to the db_name database located on the MariaDB host.

Troubleshooting SQLines Data

Troubleshooting SQLines Data for Oracle to MariaDB migration:

SQLines Data Logs

There are two main sources that can help you troubleshoot SQLines Data:

The sqldata.log file contains detailed information about the Oracle to MariaDB migration process. By default, sqldata.log is located in the current working directory. You can use the -log command line option to change its location and file name.

The sqldata_ddl.sql file contains information about all DDL statements executed in MariaDB during the migration. By default, sqldata_ddl.sql is located in the current working directory. You can use the -out command line option to change its location.

You can enable tracing by specifying -trace=yes in the sqldata.cfg file. The trace file can help the SQLines Data developers resolve crashes or specific data issues.

Data Transfer: The used command is not allowed with this MariaDB version

Sometimes you can receive the error "The used command is not allowed with this MariaDB version" during the data transfer. The tool uses the in-memory LOAD DATA LOCAL INFILE command, and a possible reason is that it is not allowed by your MariaDB server configuration. Edit my.cnf (or my.ini on Windows) and set local-infile=1 in the [mysqld] section:

[mysqld]
local-infile=1

You have to restart the MariaDB server for the change to take effect.

Data Transfer: ORA-01406: fetched column value was truncated

During data export from Oracle you can face the error "ORA-01406: fetched column value was truncated". The most likely reason is that the length in bytes of a CHAR or VARCHAR2 column stored in the Oracle database is smaller than the length of the column after the conversion on the client side for loading into MariaDB. For example, if you set -char_length_ratio=1.5, the maximum length of all CHAR and VARCHAR columns will be increased by 1.5x, so CHAR(10) in Oracle becomes CHAR(15) in MariaDB.

In this article, we show how to insert data into a database from an HTML form in Django. If you use a ModelForm in Django, you would not need to worry about inserting data into the database, because this is done automatically. That is the case with Django ModelForms. However, there are times when you may not want to use a Django ModelForm and instead just want to code the form directly in HTML and then insert the data that the user has entered into the database. That is what this article addresses: we will show how to insert the data that a user enters into an HTML form into the database.

So let's create a form in which a user creates a post. This post form will simply take in two values, the title of the post and the content of the post, and we will insert these into the database. The first thing we have to do is create our model (the database table) in the models.py file.

models.py File

We will call our model Post. It is very basic and has only two fields: title and content. After this, we save the file and then, within the command line, run the command py manage.py makemigrations, and then run the command py manage.py migrate.
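The article's original model listing was not carried over here, so the following is a minimal sketch of what such a Post model might look like; the field types and the max_length value are assumptions, since the article only says the model has a title field and a content field.

# models.py: a minimal sketch of the Post model described above.
# CharField/TextField choices and max_length are assumptions.
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)  # assumed maximum length
    content = models.TextField()              # free-form post body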
createpost.html Template File

Now we will create our template file, which we will call createpost.html. Within this template file we have the form where a user can submit a post. It is a simple form that contains only two fields, title and content: one field for the title and the other for the content. We need a name attribute on each form field, because this is how we will extract the data that the user enters into the field.

views.py File

Lastly, we have our views.py file. In this file, we take the data that the user has entered into the form fields and insert that data into the database (a sketch of this view appears at the end of this section). This is the heart of our code: we extract the data from the form fields that the user has filled in and insert it into the database.

The first thing we want to do is make sure that the user has clicked the Post button on the template page. We check this with: if request.method == 'POST'. We also want to make sure that the fields are not blank, so we use the if statement if request.POST.get('title') and request.POST.get('content'): to make sure both fields are filled in.

After this, we create a variable named post and set it equal to Post(). This sets the variable equal to a new Post object. Remember that you must import the model at the top of the views.py page. Now, in order to insert data into the database, use the model name followed by a dot and then the field name, and set this equal to request.POST.get('attribute').

To save data to the title field, we use the statement post.title and set it equal to request.POST.get('title'). This takes the title database field and sets it equal to whatever the user entered into the title form field. To save data to the content field, we use the statement post.content and set it equal to request.POST.get('content'). This takes the content database field and sets it equal to whatever the user entered into the content form field. We then save the data: without the statement post.save(), the data is not saved. You would then return any template file that you want to.

And this is how we can insert data into a database from an HTML form in Django.
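Since the article's original views.py listing was not carried over, here is a minimal sketch that matches the steps described above; the view name, template name, and response handling are assumptions, while the field handling follows the article.

# views.py: a minimal sketch of the view described above. The view and
# template names are assumptions; the field handling follows the article.
from django.shortcuts import render
from .models import Post  # import the model at the top, as noted above

def createpost(request):
    if request.method == 'POST':
        # Proceed only when both fields are filled in
        if request.POST.get('title') and request.POST.get('content'):
            post = Post()                                # new Post object
            post.title = request.POST.get('title')       # title form field
            post.content = request.POST.get('content')   # content form field
            post.save()                                  # without this, nothing is saved
    return render(request, 'createpost.html')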