Data Migration Blogs


Federal agencies have a large mandate: keep one of the largest governments in the world running on a budget. When looking for IT solutions, that means finding software that can do the job without a budget-busting price tag.

Having been approached by several federal agencies about using our PostgreSQL data migration services, we noticed the same positives come up again and again, including:

Budget: We’re sure you’re familiar with this one. Most federal agencies had heard that PostgreSQL could help with their budget constraints, but they wanted to make sure they weren’t losing functionality, features or performance by forgoing a proprietary DBMS. They found that, far from losing functionality, they were gaining tremendous flexibility without the associated cost.

Multiple vendor support: For security reasons and flexibility, federal agencies are resistant to having one vendor control their database. Multiple scenarios cropped up during our conversations including a vendor being bought by a foreign company or being beholden to one vendor with no flexibility on cost or features. PostgreSQL allows them to have more control over their support and code.

Open Source: Reassurance that PostgreSQL would remain Open Source was key to many of these agencies. The current structure of the PostgreSQL community and input from major contributors worldwide addresses the multiple vendor support concern (see above).

Robust features: A spirited PostgreSQL community means that the product will continue to develop robust features based on the community’s needs and, most importantly, do it in a timely manner. No more product roadmaps dictated by ivory-tower product managers that may or may not include necessary features.

Specialized modifications: Gaining access to PostgreSQL source code was a key benefit to these agencies.  Instead of having to contend with protected code that cannot be modified or customized without incurring heavy costs, federal agencies can implement, use and customize PostgreSQL code as they see fit.

Looking at that list of benefits, is it any surprise that federal agencies are considering PostgreSQL? It really comes down to three factors – budget, control and flexibility. With its Open Source structure and community, PostgreSQL offers these agencies all three of those things.

Bull’s long history with migrations can have you set up on a Postgres database efficiently. We also offer support, training and performance analysis. Contact us or visit our website (www.postgresmigrations.com) if you'd like to discuss your project in more detail, and to find out more about our managed services, database migration services and PostgreSQL offerings.


Surrounded by natural disasters, you’ve probably paused and thought: how secure are my equipment and my database? And if you had the misfortune to suffer a loss because of Hurricane Sandy, you may be wondering how to turn that misfortune into opportunity.

First, is it time to consider a move from a proprietary database? You can save millions by migrating your data from an exclusive or legacy, non-relational database to an open database such as PostgreSQL. With an open-source database such as PostgreSQL hosted in a low-risk data hosting center, your database is protected and efficiently managed at lower cost than maintaining a proprietary database.

The next stage would be keeping your data secure from future disasters.  The three questions you should ask are:

1. If a disaster hits tomorrow, how secure is my hardware?

If your facility doesn’t provide a secure location, consider moving your hardware to one that does. The location should comply with SAS 70 Type II standards for physical security (video surveillance and access restricted to authorized personnel) and should be monitored 24/7.

2. Do I have secure back-ups of my data in a separate location?

If you feel your hardware is secure, consider hosting back-ups of your data in a managed server data center.  The data center should be easily accessible and have extensive building management services including power and cooling systems, fire suppression, and a building management system that detects, interprets, and logs technical and security events.

3. If my network goes down, do I have a redundant connection secured?

The hardest part of a natural disaster is not being able to connect to your data remotely. If you have taken the necessary precautions and kept your data secure, make sure that you can also access that data remotely. The data center you select should have a monitored, redundant, and stable network connection with multiple access options and entry points.

Bull has experts that can help you secure your data in the case of natural disaster. Bull’s long history with migrations can have you set up on a Postgres database efficiently. We also offer support, training and performance analysis. Contact us if you'd like to discuss your project in more detail, and to find out more about our managed services, database migration services and PostgreSQL offerings.


Custom. Proprietary. Exclusive. All words we associate with quality and better performance – custom suits fit better, proprietary software meets our specific needs, exclusive access grants us a social edge.

But is that always true? For vintage cars, probably. But, for an RDBMS? Not always. We put PostgreSQL 9.1 against a proprietary RDBMS and the results are in.

Using the Open Source load-testing tool Hammerora, we simulated 500 users, each performing 10,000 transactions against 800 data warehouses, and measured sustained throughput. The same platform was used for all tests, and each test was repeated multiple times (after a reset) to ensure repeatability of the results.

So, how did PostgreSQL do? In short, consistently better. During our testing, the proprietary RDBMS samples averaged 53,824 transactions per minute (TPM), while the PostgreSQL samples averaged 64,374 TPM: almost 20% higher throughput.
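The "almost 20%" figure follows directly from the two averages; a quick arithmetic sketch:

```python
# Verify the "almost 20%" claim from the two measured averages quoted above.
proprietary_tpm = 53824  # proprietary RDBMS, transactions per minute
postgresql_tpm = 64374   # PostgreSQL, transactions per minute

uplift_pct = (postgresql_tpm - proprietary_tpm) / proprietary_tpm * 100
print(f"PostgreSQL uplift: {uplift_pct:.1f}%")  # 19.6%
```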

For the full results, click here.

Contact us if you'd like to discuss your project in more detail, and to find out more about our database migration services and PostgreSQL offerings.


At Bull, we've spent over 40 years providing companies just like yours with data migration services, including major data migrations from both DB2 and Oracle to PostgreSQL. Over that time, we’ve helped organizations cut their costs while improving efficiency. That's why we created our latest set of products and services.

If you are paying high licensing and maintenance costs…
If you are wondering how to derive maximum value from your legacy systems…
If you want to open your enterprise data to the cloud, virtualized environments and mobile devices…

It's time to MOVE IT™.

Modernize, Optimize, Virtualize and Economize Information Technology

Bull now offers a full range of services including database migrations, advanced transaction processing and training & support. We developed MOVE IT enterprise solutions to help organizations reduce costs while opening enterprise data to cloud, mobile devices and virtualized environments.

Our enterprise solutions specifically support:

  1. Database Migrations: We can help you seamlessly migrate your legacy or other relational database to PostgreSQL, the world's most advanced open source database. Our services include consulting, implementation and optimization services for both infrastructure and applications.
  2. LiberTP Transaction Processing: Our next generation transaction processor enables easy porting of older UNIX-based COBOL and C transaction processing applications to an open environment.
  3. Training & Support: Our support services include PostgreSQL support subscriptions; database design and build assessments; database performance and tuning services; and forms and reports migration services.


MOVE IT products and services can work effectively standalone, solving specific challenges, or together, seamlessly providing enterprise IT clients with multiple benefits. Whether you manage work internally or outsource, MOVE IT services and software can free you from high licensing and maintenance costs so your business can flourish in competitive environments.

Contact us if you'd like to discuss your project in more detail, and to find out more about our database migration services and PostgreSQL offerings.


Ken Rosensteel presented Bull's PL/SQL-to-PL/pgSQL tool at PGEast last year. Now a free public demonstration version of this tool is available. The demo tool provides a wide range of translation possibilities (though not the full tool's capabilities). The user simply pastes in the text of a PL/SQL stored procedure for a quick translation and can then evaluate the results.

You are welcome to try out the free demo tool at http://www.postgresmigrations.com/stored-procedure-tool.html, and we request your feedback. You can use the demo tool for 30 days, with a limit of 2000 lines of PL/SQL Stored Procedures.


As mentioned in an earlier post, we're announcing the start of a PUG for Arizona. The first meeting will be Thursday, March 29, from 5 to 7 at Bull's facility in Northwest Phoenix. Refreshments will be provided.

You can join and get onto the mailing list and RSVP for the first meeting at https://www.bigtent.com/groups/azpug


Why aren't more people/companies using PostgreSQL? Why aren't they moving from proprietary databases to PostgreSQL?

Possible answers: I haven't heard of PostgreSQL. I don't think PostgreSQL will meet my needs. I don't know how to get away from my database vendor.

I'd like to hear some of the reasons for not considering PostgreSQL when creating or migrating a database.


We want to set up a Postgres Users group in the Arizona/Southwest area. Are you interested? We're in metro Phoenix. Any local Postgres users out there?


We should be seeing all ISVs (Independent Software Vendors) migrating their products to PostgreSQL. Why? Because it's one of the fastest ways to increase their ROI and decrease their current customers' costs.

ISVs use a database in their product and typically require their customers to pay for the software license and support. Database software support is an annual expense, so customers have to pay for it, regardless of any special pricing arrangements with the database vendor.

So why aren't we seeing ISVs using PostgreSQL instead of proprietary databases? Some of the reasons – and our responses – follow:

• "PostgreSQL wasn't an option when we developed our software."

Maybe PostgreSQL wasn't an option back then; however, if your software is a viable product and you are doing updates, you should plan to include PostgreSQL. Migrating the software to use PostgreSQL, from any of the big proprietary databases such as Oracle or SQL Server, is easier than ever. With tools provided at the PostgreSQL.org site, most migrations can be done in a few days. If your software uses features like stored procedures to increase database efficiency, our tools, available at postgresmigrations.com, can help migrate those expensive databases to PostgreSQL.

• "We allow our customers to choose which proprietary database they want."

Customers should be able to choose between proprietary databases and Open databases. PostgreSQL has the enterprise features that match any of the proprietary databases. Thinking of your customers' bottom line and allowing them to save money on yearly licensing and support costs will enhance your reputation.

• "The customer feels more confident in the software when paying for a proprietary database."

Customers will be more confident using PostgreSQL once they know that they have options for support and that other customers are using it. In addition, you can provide current customers with options for migrating their current proprietary database to PostgreSQL. This lets your customers know that you are their partner and are looking out for them by helping them save money. You can provide your customers with potential migration specialists or refer them to the PostgreSQL site.

For a small investment to provide PostgreSQL support in their next release, ISVs create a significant differentiator between themselves and the competition. ISVs can build a real feeling of partnership with their current customers by providing a lower-cost database alternative.

ISVs such as Bowman Systems LLC have already done it, and companies such as VMware have it on their roadmap. What are the rest of the ISVs doing?


The obvious advantage of migrating from a proprietary database to PostgreSQL is the absence of a yearly licensing fee. Regardless of what you currently pay for licensing, it's cheaper (i.e., zero dollars) to use PostgreSQL. Also, with multiple vendors to choose from, maintenance costs can be substantially lower.

Companies that foresee the potential for large savings from this type of migration can consider migrating their databases to PostgreSQL or using PostgreSQL with new projects.

VMware's introduction of the vFabric Data Director makes it possible to save on database licensing and operational costs, in addition to the reduced infrastructure costs brought by consolidating with VMware.

VMware's introduction of support for more than 8 CPUs facilitates virtualization of the largest database servers. Very large enterprise users can now move up to extremely large servers, such as the novascale bullion from Bull, which can support up to 160 cores in one SMP system. Their application servers, database servers, mail servers and Web servers can run from one consolidated server, saving on multiple costs (both hardware and software). When PostgreSQL is used exclusively on these massive servers, there are no database licensing issues. That's a striking contrast to proprietary database vendors, who require licenses for all cores in the server.

With this two-pronged approach to Database cost savings, it's now possible to really leverage your IT dollars to get better performance and substantially lower cost.


We had excellent attendance and a good discussion today at the PGEast presentation.

The presentation covered experiences with large customer database conversions we have recently completed, and also described a new tool developed to convert Oracle stored procedures from PL/SQL to PostgreSQL PL/pgSQL.

The presentation is linked here:

Large Customers Want PostgreSQL Too !!


This is one of a series of blogs related to the automated conversion of PL/SQL to PL/pgSQL.

One of the Oracle stored procedure conversion topics that we will be presenting at the PGEast conference in New York is that of CONNECT BY query translation. In this case the issue is mainly the SQL itself. In other blogs I will discuss solutions for more general topics like Oracle Packages and Global Variables.

See our presentation: Large Customers Want PostgreSQL Too !

https://www.postgresqlconference.org/content/large-customers-want-postgresql-too

Linked List Processing

Frequently, the PL/SQL we have translated contained SELECT statements that included the CONNECT BY option. In many cases this feature was used to identify related documents in a document table where the documents were grouped together by a singly-linked list and the CONNECT BY feature was used to traverse the list.

To illustrate this concept consider the following table:

create table songList (
   artistId int not null,
   id int not null,
   priorId int,
   title varchar2(20) );

If we assign artistId of 401 to Taylor Swift, then we can map some songs from Taylor Swift’s second album into this table starting with id 201 and begin another list which will ultimately link songs from her next album:

insert into songList values (401, 201, NULL, 'Fearless');
insert into songList values (401, 202, 201, 'Fifteen');
insert into songList values (401, 203, 202, 'Love Song');
insert into songList values (401, 205, 203, 'White Horse');
insert into songList values (401, 206, 205, 'You Belong With Me');
insert into songList values (401, 301, NULL, 'unknown');

The following Oracle CONNECT BY query could be used to list the songs in Taylor Swift’s second album (that is the 200 series):

   SELECT artistId, id, title
    FROM songList
   WHERE artistId = 401
   START WITH id = 201
   CONNECT BY PRIOR id = priorId;

Which generates a result of:

  ARTISTID     ID  TITLE
========== ======   ====================
       401    201  Fearless
       401    202  Fifteen
       401    203  Love Song
       401    205  White Horse
       401    206  You Belong With Me

To process the linked list in the same manner using Postgres, one must use a recursive query. My translation tool can translate the prior Oracle CONNECT BY query into a recursive query automatically. When the prior SELECT query is processed by my translator, the following is produced:

  WITH RECURSIVE rqName (
      artistId, id, title)
  AS ( SELECT      -- the anchor query
      artistId,
      id,
      title
  FROM
      songList
  WHERE
      id = 201
  UNION ALL
   SELECT      -- the recursive query
      tn.artistId,
      tn.id,
      tn.title
  FROM
      rqName tp, songList tn
  WHERE
      tp.id = tn.priorId
  )
  SELECT artistId, id, title
   FROM rqName
  WHERE artistId = 401;

The anchor query is responsible for finding the START WITH row. The recursive query then UNIONs in its results, which depend on the join between the most recent result’s id (tp.id) and the current song’s link field (tn.priorId). The query iterates through the list of ids until no row’s priorId matches the most recent id, which happens after ‘You Belong With Me’ (id 206).

The restriction artistId = 401 could instead be added to the anchor query, but I have found through experimentation that Postgres’ performance is better when the query is written as shown. This improvement, observed with EXPLAIN ANALYZE, is significant when a lot of data is involved.
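The translated recursive query can be exercised end to end without an Oracle or Postgres installation: SQLite also implements WITH RECURSIVE, so a short script can run it against the songList data from this post. This is a sketch only, with SQLite standing in for Postgres:

```python
import sqlite3

# Build the songList table from the example above in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE songList (
    artistId INTEGER NOT NULL,
    id       INTEGER NOT NULL,
    priorId  INTEGER,
    title    VARCHAR(20)
);
INSERT INTO songList VALUES (401, 201, NULL, 'Fearless');
INSERT INTO songList VALUES (401, 202, 201,  'Fifteen');
INSERT INTO songList VALUES (401, 203, 202,  'Love Song');
INSERT INTO songList VALUES (401, 205, 203,  'White Horse');
INSERT INTO songList VALUES (401, 206, 205,  'You Belong With Me');
INSERT INTO songList VALUES (401, 301, NULL, 'unknown');
""")

# The translated query: the anchor finds the START WITH row, and the
# recursive member follows priorId links until none match.
rows = conn.execute("""
WITH RECURSIVE rqName (artistId, id, title) AS (
    SELECT artistId, id, title           -- the anchor query
      FROM songList
     WHERE id = 201
    UNION ALL
    SELECT tn.artistId, tn.id, tn.title  -- the recursive query
      FROM rqName tp
      JOIN songList tn ON tp.id = tn.priorId
)
SELECT artistId, id, title FROM rqName WHERE artistId = 401
""").fetchall()

for artist_id, song_id, title in rows:
    print(artist_id, song_id, title)
```

The row with id 301 starts a different list, so it is never reached by the recursion.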

Tree Hierarchy Processing

The prior recursive query as coded was sufficient to process one hierarchy, because of the singly-linked list. A common requirement is that more than one hierarchy, or multiple branches of a tree, must be processed. My translator can optionally make the recursive-query code modifications required to support this in Postgres.

For example, consider the manager / employee relationships in the emp table of the Oracle demo database.

   select ename, empno, mgr from emp;

   ENAME       EMPNO        MGR
   ====== ========== ==========
    SMITH       7369       7902
    ALLEN       7499       7698
     WARD       7521       7698
    JONES       7566       7839
   MARTIN       7654       7698
    BLAKE       7698       7839
    CLARK       7782       7839
    SCOTT       7788       7566
     KING       7839
   TURNER       7844       7698
    ADAMS       7876       7788
    JAMES       7900       7698
     FORD       7902       7566
   MILLER       7934       7782

One can use an Oracle CONNECT BY to generate the manager to employee hierarchy:

 SELECT ename, empno, mgr FROM emp
  START WITH mgr IS NULL
  CONNECT BY PRIOR empno = mgr;

   ENAME       EMPNO        MGR
   ====== ========== ==========
     KING       7839
    JONES       7566       7839
    SCOTT       7788       7566
    ADAMS       7876       7788
     FORD       7902       7566
    SMITH       7369       7902
    BLAKE       7698       7839
    ALLEN       7499       7698
     WARD       7521       7698
   MARTIN       7654       7698
   TURNER       7844       7698
    JAMES       7900       7698
    CLARK       7782       7839
   MILLER       7934       7782

The equivalent Recursive query produced from the Oracle CONNECT BY query using my translator with the hierarchy option enabled is:

 WITH RECURSIVE rqName (
     ename, empno, mgr, arrHierarchy)
 AS ( SELECT         -- the anchor query
     ename,
     empno,
     mgr,
     ARRAY[coalesce(empno,0)]     -- array for preserving the hierarchy order
 FROM
     emp
 WHERE
     mgr IS NULL
 UNION ALL
   SELECT          -- the recursive query
     tn.ename,
     tn.empno,
     tn.mgr,
     arrHierarchy || tn.empno     -- append to preserve the hierarchy
 FROM
     rqName tp, emp tn
 WHERE
     tp.empno = tn.mgr
 )
 SELECT ename, empno, mgr
   FROM rqName
  ORDER BY arrHierarchy;

The results for this query are identical to that of the Oracle CONNECT BY.

The key to processing branches of the tree hierarchy is to capture the “path” in the array named arrHierarchy and then sort the results by this path. The path was accumulated in the array using the concatenation operator in the code:

arrHierarchy || tn.empno

Path Display

The path will be displayed if the Oracle function SYS_CONNECT_BY_PATH is referenced in the SELECT list. This means that the array arrHierarchy will be available and should probably be cast to VARCHAR.
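The path-ordering idea can be illustrated runnably. In this sketch SQLite stands in for Postgres and, lacking arrays, carries the path as a delimited string instead of arrHierarchy (in Postgres one might cast with something like array_to_string); a cut-down emp hierarchy is used:

```python
import sqlite3

# A cut-down emp hierarchy from the example above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (ename TEXT, empno INTEGER, mgr INTEGER);
INSERT INTO emp VALUES ('KING',  7839, NULL);
INSERT INTO emp VALUES ('JONES', 7566, 7839);
INSERT INTO emp VALUES ('SCOTT', 7788, 7566);
INSERT INTO emp VALUES ('FORD',  7902, 7566);
INSERT INTO emp VALUES ('BLAKE', 7698, 7839);
""")

# The path accumulates as '/root/.../empno', mimicking SYS_CONNECT_BY_PATH;
# sorting by it reproduces the hierarchy order. (A plain string sort works
# here only because every empno has the same number of digits.)
rows = conn.execute("""
WITH RECURSIVE rq (ename, empno, mgr, path) AS (
    SELECT ename, empno, mgr, '/' || empno   -- anchor: the root row
      FROM emp WHERE mgr IS NULL
    UNION ALL
    SELECT tn.ename, tn.empno, tn.mgr, tp.path || '/' || tn.empno
      FROM rq tp JOIN emp tn ON tp.empno = tn.mgr
)
SELECT ename, path FROM rq ORDER BY path
""").fetchall()

for ename, path in rows:
    print(f"{path:22s} {ename}")
```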

Level Emulation

My translator can also customize the recursive query to display the equivalent of the Oracle LEVEL pseudocolumn when the SELECT list includes the Oracle LEVEL keyword. The following Oracle query evaluates the emp table with LEVEL selected:

 SELECT ename, empno, mgr, LEVEL FROM emp
  START WITH mgr IS NULL
  CONNECT BY PRIOR empno = mgr;

   ENAME       EMPNO        MGR  LEVEL
   ====== ========== ========== ======
     KING       7839                 1
    JONES       7566       7839      2
    SCOTT       7788       7566      3
    ADAMS       7876       7788      4
     FORD       7902       7566      3
    SMITH       7369       7902      4
    BLAKE       7698       7839      2
    ALLEN       7499       7698      3
     WARD       7521       7698      3
   MARTIN       7654       7698      3
   TURNER       7844       7698      3
    JAMES       7900       7698      3
    CLARK       7782       7839      2
   MILLER       7934       7782      3

The translation of the prior query with the LEVEL specification follows:

 WITH RECURSIVE rqName (
     ename, empno, mgr, LEVEL, arrHierarchy)
 AS ( SELECT       -- the anchor query
     ename,
     empno,
     mgr,
     1,
     ARRAY[coalesce(empno,0)]   -- array for preserving the hierarchy order
 FROM
     emp
 WHERE
     mgr IS NULL
 UNION ALL
  SELECT       -- the recursive query
     tn.ename,
     tn.empno,
     tn.mgr,
     tp.LEVEL + 1,
     arrHierarchy || tn.empno     -- append to preserve the hierarchy
 FROM
     rqName tp, emp tn
 WHERE
     tp.empno = tn.mgr
 )
 SELECT ename, empno, mgr, LEVEL
  FROM rqName
  ORDER BY arrHierarchy;
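The LEVEL emulation can likewise be exercised outside Oracle. In this sketch SQLite again stands in for Postgres, with the level carried as a counter incremented in the recursive member and the hierarchy path as a string rather than an array:

```python
import sqlite3

# A cut-down emp hierarchy.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (ename TEXT, empno INTEGER, mgr INTEGER);
INSERT INTO emp VALUES ('KING',  7839, NULL);
INSERT INTO emp VALUES ('JONES', 7566, 7839);
INSERT INTO emp VALUES ('SCOTT', 7788, 7566);
INSERT INTO emp VALUES ('BLAKE', 7698, 7839);
""")

# level starts at 1 in the anchor and is incremented once per recursion,
# matching Oracle's LEVEL pseudocolumn.
rows = conn.execute("""
WITH RECURSIVE rq (ename, empno, mgr, level, path) AS (
    SELECT ename, empno, mgr, 1, '/' || empno
      FROM emp WHERE mgr IS NULL
    UNION ALL
    SELECT tn.ename, tn.empno, tn.mgr, tp.level + 1, tp.path || '/' || tn.empno
      FROM rq tp JOIN emp tn ON tp.empno = tn.mgr
)
SELECT ename, level FROM rq ORDER BY path
""").fetchall()

# Indent each name by its level to show the tree shape.
for ename, level in rows:
    print('  ' * (level - 1) + ename)
```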

Conclusion

Rewriting CONNECT BY queries by hand can be a very challenging task. My translation tool can relieve much of the work for most of the forms of CONNECT BY that we have encountered. Some forms do require special hand recoding, such as a CONNECT BY in a subquery (because Postgres does not support recursive queries in a subquery). Nevertheless, the Oracle CONNECT BY needn’t be viewed as an insurmountable hurdle when considering a PL/SQL to PL/pgSQL migration.
