Feed aggregator

Oracle GoldenGate 23c “Free” – What you need to know!

DBASolved - Mon, 2023-09-25 08:54

Last week at Oracle Cloud World, the Oracle GoldenGate Product team announced the latest version of Oracle GoldenGate “Free”!  This […]

The post Oracle GoldenGate 23c “Free” – What you need to know! appeared first on DBASolved.

Categories: DBA Blogs

LLama 2 LLM for PDF Invoice Data Extraction

Andrejus Baranovski - Mon, 2023-09-25 02:47
I show how you can extract data from a text PDF invoice using the LLama 2 LLM model running on a free Colab GPU instance. I specifically explain how you can improve data retrieval using carefully crafted prompts.

 

A Performance Comparison between Oracle’s REGEXP_SUBSTR and SUBSTR functions

The Anti-Kyte - Mon, 2023-09-25 01:30

Oracle SQL offers support for POSIX regular expressions by means of a suite of native functions.

[Image: Darth Sidious, light-saber in hand, imparting ancient Sith wisdom regarding regular expressions.]

Regular Expressions are a pathway to many abilities that some consider…unnatural.

Whilst it provides the complex search conditions that regular expressions offer, REGEXP_SUBSTR has acquired a reputation for being a fair bit slower than the standard SUBSTR function.

What I’m going to demonstrate here is :

  • how SUBSTR seems generally to be faster than REGEXP_SUBSTR
  • how the performance of REGEXP_SUBSTR can improve dramatically when used with INSTR
  • how REGEXP_SUBSTR performs better when it matches at the start of a string

To start with though, we'll discover why you'll never see a Sith Lord on Sesame Street ( hint : it's the way they count in a Galaxy Far, Far Away)…

A Test Table

We have a two-column table that looks like this :

create table star_wars_numbers
(
    id number generated always as identity,
    episode varchar2(4000)
)
/

insert into star_wars_numbers( episode)
values('Four');

insert into star_wars_numbers( episode)
values('Four|Five');

insert into star_wars_numbers( episode)
values('Four|Five|Six');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight');

insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight|Nine');

commit;

Whilst the contents of the EPISODE column may be the result of arcane Sith ritual, Moof-Milker activity in the design process cannot be ruled out.

Either way, the EPISODE column contains up to 11 values, separated by a '|'.
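As a quick sanity check on that claim, we can count the delimited values per row. A minimal sketch using REGEXP_COUNT (available from 11g onwards) :

select id, regexp_count(episode, '\|') + 1 as num_values
from star_wars_numbers
order by id;

The counts run from 1 up to 11, matching the inserts above.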

Extracting each value in SQL

Using SUBSTR and INSTR to separate out the values in EPISODES is, as ever, quite a bit of typing…

select 
    id, 
    episode, 
    substr
    (
        episode, 
        1, 
        case 
            when instr(episode, '|') > 0 then instr(episode, '|') -1 
            else length(episode) 
        end
    )  as "A New Hope",
    case when instr(episode, '|', 1, 1) > 0 then
        substr
        (
            episode, 
            instr(episode, '|', 1, 1) + 1, -- start
            case 
                when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1 
                else length(episode) 
            end
            -
            instr(episode, '|', 1, 1)   
        ) 
    end as "The Empire Strikes Back",
    case when instr(episode, '|', 1, 2) > 0 then
        substr( episode, instr(episode, '|', 1, 2) + 1,
            case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 2))
    end as "Return of the Jedi",    
    case when instr(episode, '|', 1, 3) > 0 then
        substr( episode, instr(episode, '|', 1, 3) + 1,
            case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 3))
    end as "Phantom Menace",
    case when instr(episode, '|', 1, 4) > 0 then
        substr( episode, instr(episode, '|', 1, 4) + 1,
            case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 4))
    end as "Attack of the Clones",
    case when instr(episode, '|', 1, 5) > 0 then
        substr( episode, instr(episode, '|', 1, 5) + 1,
            case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 5))
    end as "Revenge of the Sith",
    case when instr(episode, '|', 1, 6) > 0 then
        substr( episode, instr(episode, '|', 1, 6) + 1,
            case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 6))
    end as "The Force Awakens",
    case when instr(episode, '|', 1, 7) > 0 then
        substr( episode, instr(episode, '|', 1, 7) + 1,
            case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 7))
    end as "Rogue One",        
    case when instr(episode, '|', 1, 8) > 0 then
        substr( episode, instr(episode, '|', 1, 8) + 1,
            case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 8))
    end as "Solo",        
    case when instr(episode, '|', 1, 9) > 0 then
        substr( episode, instr(episode, '|', 1, 9) + 1,
            case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 9))
    end as "The Last Jedi",        
    case when instr(episode, '|', 1, 10) > 0 then
        substr( episode, instr(episode, '|', 1, 10) + 1,
            case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 10))
    end as "The Rise of Skywalker"                
from star_wars_numbers
/

…although these days (version 12c and later), we can save ourselves a bit of effort by using an inline function…

with function extract_episode( i_string in varchar2, i_pos in pls_integer)
    return varchar2
is
begin
    if i_pos = 1 then
        return substr(i_string, 1, case when instr(i_string, '|') > 0 then instr(i_string, '|') - 1 else length(i_string) end);
    end if;    
    return
        case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
            substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
                case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end 
                -
                instr( i_string, '|', 1, i_pos - 1))
        end;
end;
select 
id, episode, 
    extract_episode( episode, 1) as "A New Hope",
    extract_episode( episode, 2) as "Empire Strikes Back",
    extract_episode( episode, 3) as "Return of the Jedi",
    extract_episode( episode, 4) as "The Phantom Menace",
    extract_episode( episode, 5) as "Attack of the Clones",
    extract_episode( episode, 6) as "Revenge of the Sith",
    extract_episode( episode, 7) as "The Force Awakens",
    extract_episode( episode, 8) as "Rogue One",
    extract_episode( episode, 9) as "Solo",
    extract_episode( episode, 10) as "The Last Jedi",
    extract_episode( episode, 11) as "The Rise of Skywalker"
from star_wars_numbers
/

Whether you find the equivalent regexp query more elegant or just horrible is somewhat subjective :

select 
    regexp_substr(episode, '[^|]+') as "A New Hope",
    regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
    regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
    regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
    regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
    regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
    regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
    regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
    regexp_substr(episode, '[^|]+',1, 9) as "Solo",
    regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
    regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from star_wars_numbers
/

Irrespective of aesthetic considerations, I should explain the regexp in use here, if only so I can understand it when I read this a few months from now :

[^|] - match any character that isn't pipe ("|")
+ - match one or more times
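To see the pattern in action on a single value, here's a quick check you can run as-is :

select regexp_substr('Four|Five|Six', '[^|]+', 1, 2) as second_value
from dual;

-- returns 'Five' : the second occurrence of "one or more non-pipe characters"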

Next, we need to find out how the regexp function stacks up when it comes to performance, and we’re not going to find that with a mere 11 rows of data…

Performance Test Setup

Let’s make some use of all that space in the Galaxy Far, Far Away…

create table galaxy (
    id number generated always as identity,
    episode varchar2(4000))
/

begin
    for i in 1..100000 loop
        insert into galaxy(episode)
        select episode from star_wars_numbers;
        
        commit;
    end loop;
end;
/

exec dbms_stats.gather_table_stats(user, 'galaxy');
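If you want to confirm the volume before running the tests (a trivial check, assuming the setup above), the table should now hold 100,000 copies of the 11 original rows :

select count(*) from galaxy;

--   COUNT(*)
-- ----------
--    1100000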

All of the following tests were run on Oracle 19c Enterprise Edition running on Oracle Linux.

Everything was run in a SQL*Plus session from the command line on the database server.

The queries were run in SQL*Plus with the following settings :

set lines 130
clear screen
set autotrace trace
set timing on

Each query was executed at least twice consecutively to ensure that results were not skewed by the state of the buffer cache.

It’s also worth noting that I found no low-level explanation for the performance discrepancies between the two functions when trawling through trace files. Therefore, I’ve concentrated on elapsed time as the main performance metric in these tests.

Test 1 – All Fields extracted in the Select Clause

Let’s start with the SUBSTR function ( referred to as “Standard” henceforth) :

select 
    id, 
    substr
    (
        episode,  -- input string
        1,  -- start position
        case 
            when instr(episode, '|') > 0 then instr(episode, '|') -1 
            else length(episode) 
        end -- number of characters to extract
    )  as "A New Hope",
    case when instr(episode, '|', 1, 1) > 0 then
        substr
        (
            episode, 
            instr(episode, '|', 1, 1) + 1, -- start
            case 
                when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1 
                else length(episode) 
            end
            -
            instr(episode, '|', 1, 1)   
        ) 
    end as "The Empire Strikes Back",
    case when instr(episode, '|', 1, 2) > 0 then
        substr( episode, instr(episode, '|', 1, 2) + 1,
            case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 2))
    end as "Return of the Jedi",    
    case when instr(episode, '|', 1, 3) > 0 then
        substr( episode, instr(episode, '|', 1, 3) + 1,
            case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 3))
    end as "Phantom Menace",
    case when instr(episode, '|', 1, 4) > 0 then
        substr( episode, instr(episode, '|', 1, 4) + 1,
            case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 4))
    end as "Attack of the Clones",
    case when instr(episode, '|', 1, 5) > 0 then
        substr( episode, instr(episode, '|', 1, 5) + 1,
            case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 5))
    end as "Revenge of the Sith",
    case when instr(episode, '|', 1, 6) > 0 then
        substr( episode, instr(episode, '|', 1, 6) + 1,
            case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 6))
    end as "The Force Awakens",
    case when instr(episode, '|', 1, 7) > 0 then
        substr( episode, instr(episode, '|', 1, 7) + 1,
            case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 7))
    end as "Rogue One",        
    case when instr(episode, '|', 1, 8) > 0 then
        substr( episode, instr(episode, '|', 1, 8) + 1,
            case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 8))
    end as "Solo",        
    case when instr(episode, '|', 1, 9) > 0 then
        substr( episode, instr(episode, '|', 1, 9) + 1,
            case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 9))
    end as "The Last Jedi",        
    case when instr(episode, '|', 1, 10) > 0 then
        substr( episode, instr(episode, '|', 1, 10) + 1,
            case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 10))
    end as "The Rise of Skywalker"                
from galaxy
/

Running this query, we get :

1100000 rows selected.

Elapsed: 00:00:20.32

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857676  bytes sent via SQL*Net to client                                                                                     
     811886  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Before we look at the REGEXP_SUBSTR equivalent, it’s probably worth considering the more streamlined in-line function version of this query :

with function extract_episode( i_string in varchar2, i_pos in pls_integer)
    return varchar2
is
begin
    return
        case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
            substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
                case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end 
                -
                instr( i_string, '|', 1, i_pos - 1))
        end;
end;
select 
    id, 
    substr
    (
        episode, 
        1, 
        case 
            when instr(episode, '|') > 0 then instr(episode, '|') -1 
            else length(episode) 
        end
    ) as "A New Hope",
    extract_episode( episode, 2) as "Empire Strikes Back",
    extract_episode( episode, 3) as "Return of the Jedi",
    extract_episode( episode, 4) as "The Phantom Menace",
    extract_episode( episode, 5) as "Attack of the Clones",
    extract_episode( episode, 6) as "Revenge of the Sith",
    extract_episode( episode, 7) as "The Force Awakens",
    extract_episode( episode, 8) as "Rogue One",
    extract_episode( episode, 9) as "Solo",
    extract_episode( episode, 10) as "The Last Jedi",
    extract_episode( episode, 11) as "The Rise of Skywalker"
from galaxy
/

Whilst it’s a bit more readable, it’s also a lot slower :

1100000 rows selected.

Elapsed: 00:00:41.76

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857676  bytes sent via SQL*Net to client                                                                                     
     810042  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

REGEXP_SUBSTR, however, takes slow to a new level…

select id, 
    regexp_substr(episode, '[^|]+') as "A New Hope",
    regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
    regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
    regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
    regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
    regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
    regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
    regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
    regexp_substr(episode, '[^|]+',1, 9) as "Solo",
    regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
    regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:01:27.25

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857680  bytes sent via SQL*Net to client                                                                                     
     809519  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Now see what happens when we give REGEXP_SUBSTR a little help from the humble INSTR (the “hybrid” query) :

select id, 
    regexp_substr(episode, '[^|]+') as "A New Hope",
    decode( instr(episode, '|'), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,1) + 1)) as "The Empire Strikes Back",
    decode( instr(episode, '|',1, 2), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,2) + 1)) as "Return of the Jedi",
    decode( instr(episode, '|',1, 3), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,3) + 1)) as "The Phantom Menace",
    decode( instr(episode, '|',1, 4), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,4) + 1)) as "Attack of the Clones",
    decode( instr(episode, '|',1, 5), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,5) + 1)) as "Revenge of the Sith",
    decode( instr(episode, '|',1, 6), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,6) + 1)) as "The Force Awakens",
    decode( instr(episode, '|',1, 7), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,7) + 1)) as "Rogue One",
    decode( instr(episode, '|',1, 8), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,8) + 1)) as "Solo",
    decode( instr(episode, '|',1, 9), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,9) + 1)) as "The Last Jedi",
    decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy
/

Yes, I have cheated a bit on the aesthetics here by using the more compact DECODE instead of CASE.
However, this does not affect the runtime of the query, which is a bit faster than the pure REGEXP_SUBSTR equivalent :
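If you prefer CASE, the DECODE calls translate directly. Here's a sketch of the equivalence for a single column (both expressions return the same result) :

select
    decode( instr(episode, '|'), 0, null,
        regexp_substr(episode, '[^|]+', instr(episode, '|',1,1) + 1)) as decode_form,
    case when instr(episode, '|') = 0 then null
        else regexp_substr(episode, '[^|]+', instr(episode, '|',1,1) + 1)
    end as case_form
from star_wars_numbers;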

1100000 rows selected.

Elapsed: 00:00:30.83

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   35857680  bytes sent via SQL*Net to client                                                                                     
     810158  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       


Test 1 Results Summary

Query Method                          Time (secs)
SUBSTR (Standard)                     20.32
REGEXP_SUBSTR with INSTR (Hybrid)     30.83
SUBSTR with in-line function          41.76
REGEXP_SUBSTR (Regexp)                87.25

The performance of the Hybrid approach does raise the question of how the REGEXP_SUBSTR compares when we’re just looking to extract a single field from a string, rather than all of them…

Test 2 – Extract the first field only

In this instance, we’re just looking for the first field in EPISODE.

In this context, the hybrid approach doesn’t apply because we’re always starting our search at the start of the input string.
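To illustrate why : the position and occurrence parameters of REGEXP_SUBSTR both default to 1, so prepending an INSTR-derived start position of 1 gains us nothing. These two expressions are identical :

select
    regexp_substr(episode, '[^|]+') as implicit_start,
    regexp_substr(episode, '[^|]+', 1, 1) as explicit_start
from star_wars_numbers;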

Starting, once again with the Standard approach :

select id, 
    substr
    (
        episode, 1, 
        case 
            when instr(episode, '|', 1,1) > 0 then instr(episode, '|', 1,1) -1 
            else length(episode) 
        end 
    ) as "A New Hope"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.33

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25086067  bytes sent via SQL*Net to client                                                                                     
     808790  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Now for REGEXP_SUBSTR :

select id,
    regexp_substr( episode, '[^|]+') as "A New Hope"
from galaxy
/

1100000 rows selected.

Elapsed: 00:00:06.38

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25086067  bytes sent via SQL*Net to client                                                                                     
     808868  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Interestingly, whilst it’s still not as fast, the Regexp runtime is only around 20% longer than the Standard.

This contrasts markedly with our first test, where the Regexp query took more than four times as long as the Standard.

Test 2 Results Summary

Query Method    Time (secs)
Standard        5.33
Regexp          6.38

Does this relative performance hold true for any single field in the input string ?

Test 3 – Extract the last field only

Starting with the Standard query :

select 
    id, 
    case when instr(episode, '|', 1, 10) > 0 then
        substr( episode, instr(episode, '|', 1, 10) + 1,
            case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end 
            -
            instr(episode, '|', 1, 10))
    end as "The Rise of Skywalker"                
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.44

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25686033  bytes sent via SQL*Net to client                                                                                     
     808915  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

Once again, the Regexp Query is much slower…

select 
    id,
    regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy 
/

1100000 rows selected.

Elapsed: 00:00:16.16

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25686033  bytes sent via SQL*Net to client                                                                                     
     808888  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       

…unless we use the hybrid approach…

select 
    id,
    decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy 
/
1100000 rows selected.

Elapsed: 00:00:05.60

Execution Plan
----------------------------------------------------------                                                                        
Plan hash value: 3574103611                                                                                                       
                                                                                                                                  
----------------------------------------------------------------------------                                                      
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                                                      
----------------------------------------------------------------------------                                                      
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |                                                      
----------------------------------------------------------------------------                                                      


Statistics
----------------------------------------------------------                                                                        
          0  recursive calls                                                                                                      
          0  db block gets                                                                                                        
      80459  consistent gets                                                                                                      
       7616  physical reads                                                                                                       
          0  redo size                                                                                                            
   25686033  bytes sent via SQL*Net to client                                                                                     
     808736  bytes received via SQL*Net from client                                                                               
      73335  SQL*Net roundtrips to/from client                                                                                    
          0  sorts (memory)                                                                                                       
          0  sorts (disk)                                                                                                         
    1100000  rows processed                                                                                                       
Test 3 Results Summary

Query Method    Time (secs)
Standard        5.44
Hybrid          5.60
Regexp          16.16

Conclusions

These tests seem to bear out the fact that, in general, SUBSTR offers better performance than REGEXP_SUBSTR.
However, the performance of REGEXP_SUBSTR can be greatly improved if used in conjunction with INSTR.
As always, the decision as to which function to use will depend on the specific circumstances you find yourself in.

Sql Konferenz 2023 in Hanau, Germany

Yann Neuhaus - Sun, 2023-09-24 17:39
Introduction

It was a great pleasure to take part in SQL Konferenz 2023 in Hanau, Germany, on 12 and 13 September 2023.

I went with my colleague Petar, who is on the same Microsoft SQL Server team at dbi services, Switzerland.

For Microsoft SQL Server fans, this event was really the right place to be.

Where Hanau is located

First of all, for those who don’t know where Hanau is: it’s a small, charming city in Germany, 350 km from Basel, Switzerland (a 3.5-hour drive), and 30 km from Frankfurt.

The event took place in the Congress Park Hanau.

1st step: Registration

Once you arrive, the first step, as usual, is to go to the registration desk to get your badge and a blue bag containing some useful items (pen, notepad, programme…).

Take a look at the agenda here: there are 6 parallel sessions of 1 hour each, all very interesting and covering a variety of themes (SQL Server on-premises, Azure, Power BI, best practices…), from 9am to 5pm.

Let’s start now

It was very intense; time passed very quickly, and with so many sessions to choose from, deciding which one to attend was a real dilemma every time.

During the keynote on the first day, all the speakers, partners and sponsors briefly introduced what they would be presenting.

Here are some examples of great sessions and speakers. I cannot list them all, of course, as they were all great and there were a lot of them; to keep it simple, I will just highlight some of the SQL Server MVP speakers, who are evangelists for us, as we draw on their experience in our daily work with SQL Server:

1. Bob Ward – Principal Architect, Microsoft Azure Data.

For all new versions, new features…, he is among the first to know about them and to reveal them officially.

Presenting the topic: Transitioning your On-Prem Database Skills to Azure SQL

2. Rie Merritt – Sr Program Manager, MVP & SQL Community for Microsoft, Organizer

Here with Rie Merritt and my colleague Petar Prljevic

3. Glenn Berry – Services Database Engineer, Microsoft Data Platform MVP

Presenting the topic: How to Survive a SQL Server Health Check

He presented some of his diagnostic queries, which are widely used and very helpful for DBAs like me in our daily duties at customers.

Conclusion

It was really a great event on Microsoft SQL Server; in my opinion too short, because there were so many interesting topics.

I recommend it to everyone who wants to stay up to date or to round out their knowledge.

The article Sql Konferenz 2023 in Hanau, Germany appeared first on dbi Blog.

Restoring a PDB from another CDB can increase your SCN

Hemant K Chitale - Sat, 2023-09-23 22:08

I start with a 19c CDB called "ORCLCDB" and a PDB "ORCLPDB1" present in it.


SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20906515

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         READ WRITE
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL>
SQL> alter pluggable database orclpdb1 close;

Pluggable database altered.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Sep 24 09:39:55 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> backup pluggable database orclpdb1 tag for_migration;

Starting backup at 24-SEP-23
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00034 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf
input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf
input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf
input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf
channel ORA_DISK_1: starting piece 1 at 24-SEP-23
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf
input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf
input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf
input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf
channel ORA_DISK_2: starting piece 1 at 24-SEP-23
channel ORA_DISK_1: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp tag=FOR_MIGRATION comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 24-SEP-23

Starting Control File and SPFILE Autobackup at 24-SEP-23
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2023_09_24/o1_mf_s_1148377605_ljz585bw_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-SEP-23

RMAN>
RMAN> quit


Recovery Manager complete.
oracle19c>sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 24 09:48:04 2023
Version 19.12.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0

SQL> alter pluggable database orclpdb1 unplug into '/opt/oracle/product/19c/dbhome_1/orclpdb1.xml';

Pluggable database altered.

SQL> !ls -l /opt/oracle/product/19c/dbhome_1/orclpdb1.xml
-rw-r--r--. 1 oracle oinstall 12583 Sep 24 09:50 /opt/oracle/product/19c/dbhome_1/orclpdb1.xml

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>


Now I go to the target CDB, CDB2, and verify that there is no conflicting PDB.  I also find that it has a lower SCN.

SQL> col name format a16
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664227

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231

SQL>
SQL> select * from v$pdbs
  2  /

    CON_ID       DBID    CON_UID GUID
---------- ---------- ---------- --------------------------------
NAME
--------------------------------------------------------------------------------------------------------------------------------
OPEN_MODE  RES OPEN_TIME                                                                   CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY
---------- --- --------------------------------------------------------------------------- ---------- ---------- ---------- --------
SNAPSHOT_PARENT_CON_ID APP APP APP APPLICATION_ROOT_CON_ID APP PRO LOCAL_UNDO   UNDO_SCN UNDO_TIME CREATION_ DIAGNOSTICS_SIZE
---------------------- --- --- --- ----------------------- --- --- ---------- ---------- --------- --------- ----------------
 PDB_COUNT AUDIT_FILES_SIZE   MAX_SIZE MAX_DIAGNOSTICS_SIZE MAX_AUDIT_SIZE LAST_CHANGE TEM
---------- ---------------- ---------- -------------------- -------------- ----------- ---
TENANT_ID
------------------------------------------------------------------------------------------------------------------------------------
UPGRADE_LEVEL GUID_BASE64
------------- ------------------------------
         2 2054948555 2054948555 88129263B99F4BBDE0530100007F7BDF
PDB$SEED
READ ONLY  NO  24-SEP-23 09.26.31.678 AM +08:00                                               1997190  957349888       8192 ENABLED
                       NO  NO  NO                          NO  NO           1        280           04-MAY-19                0
         0                0          0                    0              0 COMMON USER NO

            1 iBKSY7mfS73gUwEAAH973wA=

         3 2205532678 2205532678 E6BD9C73839C10BEE0550A00275FC834
PDBTMP
MOUNTED                                                                                      17541716  999292928       8192 ENABLED
                       NO  NO  NO                          NO  NO           1        280           21-AUG-22                0
         0                0          0                    0              0 COMMON USER NO

            1 5r2cc4OcEL7gVQoAJ1/INAA=


I then have the backups and the "closed" datafiles from the source CDB copied over to the target.  (The CREATE PLUGGABLE DATABASE command will not be using the RMAN backups, but the PDB Describe XML file and the "closed" datafiles.)

oracle19c>pwd
/tmp/From_Source
oracle19c>ls -l
total 1882384
-rw-r-----. 1 oracle oinstall 1146109952 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp
-rw-r-----. 1 oracle oinstall  758202368 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp
-rw-r-----. 1 oracle oinstall   23232512 Sep 24 09:55 o1_mf_s_1148377605_ljz585bw_.bkp
-rw-r--r--. 1 oracle oinstall      12583 Sep 24 09:59 orclpdb1.xml
oracle19c>


I now prepare to copy the PDB to the target CDB2. First I check for compatibility, and then I plug in the PDB with the new name "ORCLPDB1_NEW".

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20664231
SQL> set serveroutput on
SQL> declare
  2   compatible constant varchar2(3) :=
  3     case dbms_pdb.check_plug_compatibility(
  4       pdb_descr_file=>'/tmp/From_Source/orclpdb1.xml',
  5       pdb_name=>'ORCLPDB1')
  6     when true then 'YES'
  7     else 'NO'
  8  END;
  9  begin
 10    dbms_output.put_line(compatible);
 11  end;
 12  /
YES

PL/SQL procedure successfully completed.

SQL>
SQL> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /opt/oracle/oradata

SQL>
SQL> create pluggable database orclpdb1_new
  2  using '/tmp/From_Source/orclpdb1.xml'
  3  copy;  --- copy will copy the datafiles to the location based on db_create_file_dest and CDB database name

Pluggable database created.

SQL>
SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
 837065713 CDB2                20910195

SQL>
SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP           MOUNTED
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1_NEW     READ WRITE

SQL>


The SCN of the target CDB2 has now been set to the much higher 20910195 (it was 20664231 before I "plugged in" ORCLPDB1 as ORCLPDB1_NEW).  This is because it "read" the SCN from the headers of the datafiles that were plugged in.


I can go back to the source and drop the PDB.

SQL> select dbid, name, current_scn from v$database;

      DBID NAME             CURRENT_SCN
---------- ---------------- -----------
2778483057 ORCLCDB             20910076

SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1         MOUNTED
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL> 
SQL> drop pluggable database orclpdb1 including datafiles;

Pluggable database dropped.

SQL> 
SQL> select dbid, guid, name, open_mode from v$pdbs;

      DBID GUID                             NAME             OPEN_MODE
---------- -------------------------------- ---------------- ----------
2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED         READ ONLY
 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT          MOUNTED

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>


So, I have "migrated" ORCLPDB1 from ORCLCDB to CDB2 and, in the process, have found that the SCN in CDB2 got reset to a much higher value, because it "inherited" the higher Checkpoint SCN that was in the datafiles of ORCLPDB1.

I can verify that all the datafiles (that are OPEN) in CDB2 can get stamped with the new SCN.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20658011
         1          3              0           20658011
         1          4              0           20658011
         1          7              0           20658011
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20908595
         4         70              0           20908595
         4         71              0           20908595
         4         72              0           20908595
         4         73              0           20908595
         4         74              0           20908595
         4         75              0           20908595
         4         76              0           20908595

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20657657
         3         59              0           20657657
         3         60              0           20657657
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> col name format a16
SQL> select con_id, name, open_mode from v$pdbs;

    CON_ID NAME             OPEN_MODE
---------- ---------------- ----------
         2 PDB$SEED         READ ONLY
         3 PDBTMP           MOUNTED
         4 ORCLPDB1_NEW     READ WRITE

SQL> alter pluggable database pdbtmp open;

Pluggable database altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912767
         1          3              0           20912767
         1          4              0           20912767
         1          7              0           20912767
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912888
         3         59              0           20912888
         3         60              0           20912888
         4         69              0           20912767
         4         70              0           20912767
         4         71              0           20912767
         4         72              0           20912767
         4         73              0           20912767
         4         74              0           20912767
         4         75              0           20912767
         4         76              0           20912767

18 rows selected.

SQL> alter system checkpoint;

System altered.

SQL> select con_id, file#, plugin_change#, checkpoint_change#
  2  from v$datafile
  3  order by 1,2
  4  /

    CON_ID      FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE#
---------- ---------- -------------- ------------------
         1          1              0           20912967
         1          3              0           20912967
         1          4              0           20912967
         1          7              0           20912967
         2          5              0           13768423
         2          6              0           13768423
         2          8              0           13768423
         3         58              0           20912967
         3         59              0           20912967
         3         60              0           20912967
         4         69              0           20912967
         4         70              0           20912967
         4         71              0           20912967
         4         72              0           20912967
         4         73              0           20912967
         4         74              0           20912967
         4         75              0           20912967
         4         76              0           20912967

18 rows selected.

SQL>
SQL> alter session set container=ORCLPDB1_NEW;

Session altered.
SQL> col name format a100
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------------------------------
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_system_ljz7d02h_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_sysaux_ljz7d02l_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_undotbs1_ljz7d02m_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_users_ljz7d02o_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_ljz7d02p_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_ljz7d02r_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_ljz7d02s_.dbf
/opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_ljz7d02t_.dbf

8 rows selected.

SQL>



CON_ID=1 is Root and the datafiles get the new SCN after a Checkpoint.   Note these datafiles still had the "older" SCN 20658011 before I issued the Checkpoint.
CON_ID=2  is the SEED PDB so it does not get updated.
CON_ID=3 is PDBTMP, at 20657657, and is stamped to 20912888 after I OPEN it.
CON_ID=4 is the ORCLPDB1_NEW that I plugged in today.
All the non-SEED datafiles are stamped again (to 20912967) after another Checkpoint.

Categories: DBA Blogs

Oracle Forensics Response

Pete Finnigan - Fri, 2023-09-22 15:06
I have spoken a few times on this blog about forensics and Oracle, and in 2021 I did a talk at the UKOUG about Oracle forensics. I have just posted the slides from that talk to our site....[Read More]

Posted by Pete On 22/09/23 At 01:07 PM

Categories: Security Blogs

ODA X10: the new Oracle Database Appliances are already there

Yann Neuhaus - Thu, 2023-09-21 15:39
Introduction

The ODA X8-2 series was sold for 3 years, and I would have expected the X9-2 series to stay on the market for 2-3 years, but I was wrong. The ODA X10 series has just been announced. Maybe Oracle has a different strategy for appliances now; I will explain my thoughts later.

You will notice that Oracle removed the number 2 in the name of this new ODA series: X9-2S becomes X10-S, X9-2L becomes X10-L and X9-2HA becomes X10-HA. The number 2 makes less sense today: it referred to the number of sockets/processors, but the X10-S only has 1 processor, the X10-L has 2, and the X10-HA has 4 (2x 2). For sure, the ODA nodes are still 2-socket servers, but this number was quite confusing for some people.

Now let’s try to find out what’s new, what’s good and what’s not so good.

What is an Oracle Database Appliance?

ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it's an x86-64 server with a dedicated software distribution including Linux, the Oracle database software, a Command Line Interface (CLI), a Browser User Interface (BUI) and a virtualization layer, the goal being to simplify the database lifecycle and maximize performance. Market positioning for the ODA is quite tricky for Oracle: it sits between OCI (the Oracle public cloud) and Exadata (the highest-level database hardware – a kind of ODA on steroids). For some people, the ODA avoids migrating to OCI by bringing enough simplification to Oracle Database environments; for others, the ODA is powerful enough for most workloads and Exadata wouldn't make sense. For me, the ODA has always been one of my preferred solutions, and undoubtedly a solution to consider.

For Enterprise Edition users, ODA has a strong advantage over its competitors: capacity on demand licensing. You can enable as few as 2 cores per node, meaning a single processor licence. You can later scale up by enabling additional cores according to your needs.
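
On the appliance this is driven by odacli; a rough sketch (treat the exact options as an assumption and check the odacli reference for your release):

# show the currently enabled cores
odacli describe-cpucore

# enable 4 cores per node (the count can only be increased later, never decreased)
odacli update-cpucore -c 4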

From Intel Xeon to AMD Epyc

ODA X9-2 was available in 3 flavors:

  • X9-2S with 1 Xeon CPU, 256GB of RAM and 2x 6.8TB NVMe SSDs
  • X9-2L with 2 Xeon CPUs, 512GB of RAM and from 2 up to 12x 6.8TB NVMe SSDs
  • X9-2HA with 2 nodes (similar to X9-2L without the disks) and one or two disk enclosures with various configurations (SSDs or spinning disks)

The first change you will notice is the switch from Intel Xeon to AMD Epyc CPUs. Oracle already made this change on the Exadata platform several months ago, and it's really a global move from Oracle to AMD across its x86-64 server lineup, including the servers in OCI.

X9-2 series ODAs were using the Xeon Silver 4314 with 16 cores running between 2.4GHz and 3.4GHz; X10 series ODAs use the AMD Epyc 9334 with 32 cores running between 2.7GHz and 3.9GHz. The number of cores is not so important, because most customers using ODAs run Standard Edition or use Enterprise Edition with a limited number of cores. But regarding pure CPU speed, a significant increase is expected, something like 30% for single-thread processing.

Having a better CPU doesn't only mean better performance: it also means fewer licences for Enterprise Edition users. Having 30% faster CPUs means you need roughly 30% fewer cores to do the same tasks: if you needed 6 Xeon cores for your workload, you may run the same workload on 4 Epyc cores, so on 2 processor licences instead of 3.

Internal SSDs: back to 480GB disks

It’s was one of my complain for X9-2 series: the 2x M2 internal disks were smaller than those on X8-2, from 480GB to 240GB. Now for X10 series, Oracle brings back 480GB internal disks, and it’s better having this size.

RAM: same sizes but expandable up to 3 times

The X10-S has 256GB of RAM, like the X9-2S, but you can now triple that with additional memory modules (768GB). The X10-L comes with 512GB in its basic configuration, and you can go up to 1.5TB. The X10-HA is basically 2x X10-L with an external storage enclosure, so you have 2x 512GB, 2x 1TB or 2x 1.5TB configurations.

Data disks: same size but fewer disks for X10-L

On the X10-S and L, data disks are the same as on the X9-2 series: 6.8TB NVMe disks. The X10-S has the same limitation as the X9-2S: only 2 disks and no possible extension.

There is a catch regarding the X10-L. On the X9-2L you could have a maximum of 12 disks, meaning a huge 81TB of RAW storage. The X10-L is now limited to 8 disks, meaning only 54TB of RAW storage. One could argue that it's already a big capacity, but this is RAW TB: usable space is only 21TB when using normal redundancy. For some of my customers with fully populated X8-2M ODAs, their databases won't fit anymore in one X10-L… Another drawback when it comes to upgrading storage on your X10-L: there are only 4 disk slots on the front panel. So how can you put in 8 disks with only 4 slots? By adding them inside the server, as AIC (Add-In-Card) devices. These disks have a different form factor and you will need to open your server to add them, meaning downtime for your databases. Definitely a loss of flexibility.

The X10-HA is not that different from the X9-2HA: there is still a High Performance (HP) version and a High Capacity (HC) version, the first composed of SSDs only, the second of a mix of SSDs and HDDs. SSDs are still 7.68TB each, and only the HC gets a capacity bump thanks to bigger HDDs: from 18TB to 22TB each.

Network interfaces

Nothing new regarding network interfaces. You can have up to 3 of them (2 are optional), and for each you will choose between a quad-port 10GBase-T (copper) or a two-port 10/25GbE (SFP28). You should know that SFP28 won't connect to a 1Gbps fiber network.

Specific software bundle

The latest software bundle for the ODA is 19.20, but Oracle just released a dedicated version of the patch for the X10: 19.20.0.1. The next patch bundle will probably be compatible with the X10 as well as older models. Currently, the X7-2, X8-2 and X9-2 series are still supported.

What about editions, licences and support?

First of all, these new ODAs don't support Standard Edition 2 anymore! It's a major change, as a lot of customers are using Standard Edition 2 on these appliances.

It’s cristal clear in the documentation:

Choice of Oracle Database Software:
– Oracle Database 21c Enterprise Edition (DB Systems only)
– Oracle Database 19c Enterprise Edition

My thought is that Oracle will keep the X9-2 series for now, and will reserve the X10 and AMD Epyc for Enterprise Edition users. The X9-2 series is still relevant today, as its price and performance match the current needs of most Standard Edition customers.

You may know that the ODA is not sold with the database licences: you need to bring yours or buy them at the same time. You will need Enterprise Edition for these new ODAs, starting from 1 processor licence on the S and L models (2 activated cores) and at least 2 processor licences on the HA (2 activated cores per node). Enabling your EE licence on an ODA will actually decrease the number of cores on the server to make sure you are compliant, but be careful because it doesn't prevent you from using unlicensed options. You can also use CPU pools to keep the remaining CPUs available for other purposes, for example running application VMs.

Regarding support, as with other hardware vendors, you'll have to pay for your ODA to be supported in case of hardware failure. The 1st year of support will usually be part of your initial order.

Support for the database versions is limited to 19c and 21c. Don’t forget that only 19c databases are supported with Premier Support. 21c being an innovation release, it’s only for testing.

What are the differences between the 3 models?

The X10-S is an entry price point for a small number of small databases.

The X10-L is much more capable and can take disk expansions. A big infrastructure with hundreds of databases can easily fit on several X10-Ls. OK, disk capacity is far behind the previous X9-2L, but most customers won't put 10 additional disks in their ODA: a fully populated X9-2L was 3 times more expensive than the base model.

The X10-HA is for RAC users, because High Availability is included. The disk capacity is much higher than on the single-node models, and HDDs are still an option. With the X10-HA, big infrastructures can be consolidated on a very small number of HA ODAs.

Model          DB Edition  Nodes  U     RAM        RAM Max     RAW TB  RAW TB Max  Base price
ODA X10-S      EE only     1      2     256GB      768GB       13.6    13.6        21'579$
ODA X10-L      EE only     1      2     512GB      1536GB      13.6    54.4        34'992$
ODA X10-HA HP  EE only     2      8/12  2x 512GB   2x 1536GB   46      368         97'723$
ODA X10-HA HC  EE only     2      8/12  2x 512GB   2x 1536GB   390     792         97'723$

ODA X10 specs summary

And regarding the price?

Looking at the latest engineered systems price list (search for "exadata price list" and you will easily find it), you will find the X10 series as well as the X9-2 series. Prices for the X10-S and X10-L are identical to the X9-2S and X9-2L; the X10-HA costs 9% more than the X9-2HA. The X10-HA being quite expensive now (nearly 100'000$), I would definitely compare it to an Exadata Cloud@Customer solution.

Which one should you choose?

If you are using Standard Edition, X10 series is not for you. You will need to order X9-2 series, or keep your older ODAs.

If your databases can comfortably fit in the S model, don’t hesitate as you will probably never need more.

The most interesting model is still the new L: maximum disk capacity is 33% less than its predecessor's, but that is the trade-off for something like 33% more performance thanks to the AMD Epyc. The L is quite affordable, still has great capacity, and is upgradable if you don't buy the full system at first.

If you still want/need RAC and its associated complexity, the HA is for you.

Don’t forget that you will need at least 2 ODAs for Disaster Recovery purpose, using Data Guard. No one would recommend buying a single ODA, it’s probably better no ODA at all than a single one.

I would still prefer buying 2x ODA X10-L over 1x ODA X10-HA: NVMe speed, no RAC and a single box is definitely better. And extreme consolidation may not be the best solution.

Conclusion

The ODA X10 series brings a nice CPU speed improvement thanks to the switch to AMD Epyc, but customers running Standard Edition are not concerned and remain limited to the X9-2 series. The X10-L is also a little disappointing regarding storage possibilities compared to the X9-2L. Nevertheless, the ODA is still a good platform for database consolidation and simplification, and until now it has been very popular among our customers.

L’article ODA X10: the new Oracle Database Appliances are already there est apparu en premier sur dbi Blog.

Declaring environments in GitLab

Yann Neuhaus - Thu, 2023-09-21 10:11

Why do I need to declare environments in GitLab?

  • To track deployed versions on each environment
  • To control deployment on environments (e.g. add an approval to deploy production)
  • To add environment-related variables

An environment can be linked to a pipeline step. For example, to define the target of a deployment.

Create an environment

1. In the side menu, go to: Operate > Environments
2. Click on "New environment" and set the parameters:
  • Name: the name of your environment
  • External URL: the URL of your environment (used in the environments list for the Open button)
  • GitLab agent: an option to define the agent to be used for environment deployment. Can remain blank

Environment list

You can define several environments for a single project. They will then be available in the environment list in: Operate > Environments

View Deployments

When an environment has been specified in a deployment step, it is then possible to find out which version is deployed in the environment.

Click on the arrow at the beginning of the line to expand the details panel:

Define CI/CD variables for an environment

It is possible to define environment-specific variables to use in pipelines.

In your project, in the side menu, go to Settings > CI/CD, then expand the Variables section:

To add a variable, click on the "Add variable" button and set the parameters:

  • Key: the name of your variable
  • Value: the value of the variable
  • Environment scope: defines the environment in which the variable will be available

Note: the same variable (with the same name) can be defined for each environment.

In the .gitlab-ci.yml, environment-scoped variables are used just like standard variables (for example: ${K8S_NS}).
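
For illustration, here is a minimal sketch of a deploy job bound to an environment and consuming such a scoped variable (the job name, the kubectl step and the URL are made up for the example):

deploy-prod:
  stage: deploy
  script:
    # ${K8S_NS} resolves to the value scoped to the "production" environment
    - kubectl --namespace "${K8S_NS}" apply -f deployment.yaml
  environment:
    name: production
    url: https://myapp.example.com

When the job runs, GitLab picks the variable value whose environment scope matches "production", and the deployment is tracked on that environment.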

Protected environments

Protected environments can be used to prevent unintentional deployment or to authorize only certain users to deploy.

In the side menu, go to Settings > CI/CD and expand the Protected environments section:

Protect an environment:

Click on "Protect an environment", select the environment to protect, and set the parameters to define who can deploy and who can approve:

Deploy on a protected environment

When a pipeline reaches the step that deploys on a protected environment, it will wait for deployment approval.

At the same time, the environment deployment waits for approval in the environments list:

Click on the "Approval options" button and approve or reject the deployment:

After approval, the pipeline does not restart automatically. Click on the "Run" button to continue the deployment:

Conclusion

Environments are very useful in GitLab. It's possible to write pipelines without them, but if you don't use environments for deployment, you'll miss out on some really useful features, such as approvals and scoped variables, and it becomes harder to track deployments.

L’article Declaring environments in GitLab est apparu en premier sur dbi Blog.

Oracle Database 23c Generally Available – Sort Of

Tim Hall - Thu, 2023-09-21 04:53

Oracle Cloud World is happening, which typically means lots of announcements. One of the welcome announcements was the release of Oracle 23c on OCI Oracle Base Database Service, so there is a production version generally available… Sort of… Why do I say “sort of”? OCI Oracle Base Database Service This is a single cloud service, … Continue reading "Oracle Database 23c Generally Available – Sort Of"

The post Oracle Database 23c Generally Available – Sort Of first appeared on The ORACLE-BASE Blog.

SQL Konferenz 2023 in Hanau – Meet Tame, the data monster

Yann Neuhaus - Tue, 2023-09-19 09:38

After 2 years, or even 3 (the organizers could not agree), the SQL Konferenz took place again in Hanau.

In over 80 sessions, 73 speakers shared their experience and knowledge with us. There were incredibly many exciting topics. I would like to highlight the sessions "Migration to the Cloud – What could go wrong?" and "Performance improvement of Transact-SQL queries without code changes".

Migration to the Cloud – What could go wrong?

It was exciting to hear how a simple migration to the cloud can become really difficult due to wrong decisions.

A still lively and complex infrastructure, used especially for reports, had to be moved into the cloud without Azure DevOps.

We heard how the loading routes were redefined, how the security architecture was carried over, and how they tried to adopt all existing structures 1-to-1.

All these experiences were shared with us during the session, as well as in conversations during the breaks.

Performance improvement of Transact-SQL queries without code changes

Let's imagine we are sitting on the couch on a Saturday evening; we have just baked a nice pizza and are really looking forward to watching a football game.

Suddenly a customer calls us: the SQL Server is completely overloaded and he is looking for help.

Ok, beer and pizza on the side, laptop ON.

It quickly becomes apparent where the overload comes from: the CPU is at 99% utilization. Start the analysis.

We find out that a single query is causing the high CPU load and have to look for a solution. The query could definitely be written better; a change proposal including reindexing was sent to the customer. The answer: no idea, we have to clarify first. And as we all know, this can take days.

A lot of days. So we need an ad hoc solution.

One thing we had already done well: the Query Store was already running. That gives us possibilities through stored procedures like sp_query_store_set_hints, which allow us to influence queries without changing the application code or the database.
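
Here is a sketch of what that looks like (the query_id and the hint are made-up values; you would first look the query up in the Query Store catalog views):

-- Pin a hint to a Query Store query without touching the application code
-- (query_id 39 is hypothetical):
EXEC sys.sp_query_store_set_hints 39, N'OPTION(MAXDOP 1)';

-- Remove it again once the real code fix finally ships:
EXEC sys.sp_query_store_clear_hints 39;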

In summary, these were 2 incredibly instructive days. I also liked the community, which made for a very pleasant exchange.

Of course we must not forget Tame, the DataMonster. Thank you very much for the great support. I would love to come again next time.

L’article SQL Konferenz 2023 in Hanau – Meet Tame, the data monster est apparu en premier sur dbi Blog.

Database Vault without Database Vault

Pete Finnigan - Mon, 2023-09-18 19:46
I did a talk in Slovenia in 2022 that explores the questions, "What is Database Vault?" and "What can we do if we don't have Database Vault?". I have posted the slides to our website today and the talk is....[Read More]

Posted by Pete On 18/09/23 At 01:00 PM

Categories: Security Blogs

Ansible automates roadshow 2023

Yann Neuhaus - Mon, 2023-09-18 01:40

I attended this Red Hat tour promoting automation, the Ansible Automates roadshow in Geneva, with my colleague Arnaud Berbier. You'll find his review of the journey here.

It was the third date of their tour. In case you're interested in automation with Ansible, there are 2 more dates in Germany:

  • Frankfurt on October 17
  • Munich on October 19

It started with Loïc Avenel, the EMEA Manager of field product management for Ansible. He explained his vision of automation with 3 key concepts:

  • Think like software developers do in modern development
  • Have an automation-first cultural mindset, and this starts with us
  • Leverage and share individuals' skills and experience, by filling the talent gap and upskilling your team members

Loïc put emphasis on collaboration and backed it with some facts and figures: without it, many companies fail to transform their IT, up to 70% according to a financial analysis from Banque Cantonale de Genève.

He then showed 2 common ways enterprises do this: some gently update their tasks to be automated (optimize), others totally change their way of doing things (transform). In fact, neither is bad, but doing both is better!

He leveraged an "automation maturity curve" to explain how to be more efficient for a successful transformation. To sum up his view: an extra effort should be made at the beginning, but in the end you'll benefit more, including from the upcoming AI that everyone is talking about.

He assumes a lot of companies have already made the cultural shift to automating tasks; he calls that Day 0 / Day 1 and names this phase "Infrastructure as Code". The real next step for automation, he says, is Day 2, "Operations as Code", which consists of:

  • Standardizing operation processes
  • Going further in observability with remediations

As you may know, Red Hat is part of Big Blue IBM, and this Day 2 couldn't come without something from them: the famous Watson, with "watsonx Code Assistant" generating AI code for playbook deployment. This magic is called Ansible Lightspeed.

He also introduced Event-Driven Ansible, which I can say is promising with all its plugins and integrations. It consists of observability and automatic remediation directly in a playbook, triggered on demand by specific events.
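
To give an idea, here is a minimal Event-Driven Ansible rulebook sketch (the webhook source and the payload fields are illustrative assumptions, not from the talk):

- name: Remediate a service reported down
  hosts: all
  sources:
    # listen for alerts posted to a local webhook
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart the service on alert
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml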

The presentation ended with some words about the Ansible Automation Platform (replacing Ansible Tower). It pushes the concept further with these extra features:

  • Automation execution environments
  • Automation mesh
  • Ansible content collections
  • Ansible content tools
  • Red Hat Insights
  • Automation services catalog
  • Automation Hub

We then had a talk from Bertrand d'Hérouville, an Edge Specialist Solution Architect. He presented how Ansible automation extends to edge devices, taking his home automation as an example.

The last talk before lunch was from a Red Hat partner: Jean-François Pujol from Cisco explained how he used Ansible to manage network devices. Another great success story!

Just before noon, it was time for a very nice lunch buffet. Thanks to Red Hat and the partners for this event.

During lunch, I was thinking about what we had seen in the morning: a lot of theory and not enough concrete usage or demos. Fortunately, we then got one of the best slides of the day.

Demos will now start!!

Christian Jung from Red Hat did a quick show of hands to find out who in the room was familiar with playbooks. It was about a third of the people, which surprised me a bit. He started his demo by writing a basic playbook, and after a few minutes he showed us Lightspeed, a plugin for VS Code that analyzes what you type and helps you develop faster. Below is an example (in the demo, the suggestion appeared at line 7).
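
To give an idea of what that looks like, here is a basic playbook of the kind written in the demo; once you type a task name, Lightspeed suggests the module and parameters (this is a reconstruction, not the actual demo code):

---
- name: Install and start a web server
  hosts: webservers
  become: true
  tasks:
    - name: Install httpd        # typing this name is enough to trigger a suggestion
      ansible.builtin.package:
        name: httpd
        state: present
    - name: Start and enable httpd
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true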

After that, we had another successful use case with Ansible, from Sylvain Chen. He explained his success story at SIX: in an OpenShift environment, how he managed to develop operators and used the Ansible Automation Platform to do it.

The last talk was with RegData, about the importance of protecting data on on-prem or cloud infrastructure, using a lot of Ansible features like:

  • Generic roles
  • Variables management
  • Templating

This event was really great for learning about the new features around Ansible, for example Lightspeed, Event-Driven Ansible, and the new Ansible Tower, aka the Ansible Automation Platform.

I recommend joining the next events if you would like to learn more about Ansible.

Please find the external link here to Ansible automates roadshow 2023.

Thanks again dbi services for this opportunity.

L’article Ansible automates roadshow 2023 est apparu en premier sur dbi Blog.

Oracle to the rescue of MGM with SQL Firewall?

Yann Neuhaus - Sun, 2023-09-17 23:40
Introduction

It is true that it’s a somewhat daring title, and it’s unlikely that SQL firewall has changed anything about the attack that several casinos in Las Vegas suffered last week. However, the opportunity was too good to pass up on this topic.

As I write this blog, Las Vegas casinos are experiencing a series of cyberattacks. It turns out that at the same time, Oracle CloudWorld is taking place in Las Vegas, and one of its flagship products, the database, offers a new feature in its 23c version called 'SQL Firewall'. So, the opportunity was too good not to write a short blog about this new functionality. I would like to emphasize here that this is only a very brief overview of all the features of DBMS_SQL_FIREWALL. Oracle provides extensive documentation on this package/feature, and you will find several blogs on the internet that go beyond the tests below (have a look at my conclusion).

How does SQL Firewall work?

The principle of the package is very simple and is illustrated through the image below, which has been taken from the Oracle documentation:

SQL Firewall flow – Source: Oracle documentation
  1. First, we capture the standard activity of an application (Capture SQL Activities). Naturally, the capture must run for a sufficient amount of time to cover all specific scenarios.
  2. By examining this activity (Review), we ensure that there is nothing abnormal.
  3. We generate the list (Generate Allow-List) of allowed execution contexts. This list can be modified if necessary by adding or removing specific executions using the following package procedures: DBMS_SQL_FIREWALL.ADD_ALLOWED_CONTEXT, DBMS_SQL_FIREWALL.DELETE_ALLOWED_CONTEXT, and DBMS_SQL_FIREWALL.DELETE_ALLOWED_SQL (see the sketch after this list).
  4. We enable the list (Enable Allow-List).
  5. We monitor (Monitor) the operation using the DBA_SQL_FIREWALL_VIOLATIONS dictionary view.
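
As an illustration of step 3, here is a sketch of tightening the allow-list to a known client IP address with ADD_ALLOWED_CONTEXT (the address matches the capture shown later; run it as a user with the SQL_FIREWALL_ADMIN role):

SQL> begin
  dbms_sql_firewall.add_allowed_context (
    username     => 'APP_USER',
    context_type => dbms_sql_firewall.ip_address,
    value        => '10.0.0.79');
end;
/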
Example

For this example I will need two users:

  • schema_owner who is the owner of the tables & views of the application
  • app_user who needs to access to the application

We could use a specific user to manage SQL Firewall by granting him the SQL_FIREWALL_ADMIN role, but as stated in my introduction, I'll keep this example very short and simple.

Users creation

Creation of the user schema_owner:

SQL> create user schema_owner identified by schema_owner quota unlimited on users;
grant connect, resource to schema_owner;
grant create session, grant any privilege to schema_owner;
User created.

Creation of the user app_user:

SQL> create user app_user identified by app_user;
grant create session to app_user;
grant select any table on schema schema_owner to app_user;
User created.
Objects creation

Now we will create few objects in our schema_owner schema:

SQL> CREATE TABLE schema_owner.customers
( customer_id number(10) NOT NULL,
customer_name varchar2(50) NOT NULL,
city varchar2(50),
credit_card number(16),
CONSTRAINT customers_pk PRIMARY KEY (customer_id)
);
Table created.

SQL> create view schema_owner.non_sensitive_view as select customer_name, city from schema_owner.customers;

View created.
Data insertion

We will now insert data in our table:

SQL> insert into schema_owner.customers values (001,'steulet','Delemont',00001111222233334444);
insert into schema_owner.customers values (002,'errard','Courrendlin',2222333344445555);
insert into schema_owner.customers values (003,'frund','Porrendutry',7777888899990000);
1 row created.
Start the capture of the statements (Capture SQL Activities)

Before being able to capture the statements, we have to be sure that SQL Firewall is enabled:

SQL> select status
from dba_sql_firewall_status; 
STATUS
------------
DISABLED
SQL> exec dbms_sql_firewall.enable;

PL/SQL procedure successfully completed.
SQL> select status
from dba_sql_firewall_status;
STATUS
------------
ENABLED

We can now start the capure process of the standard context for the user app_user using the below procedure:

SQL> begin
dbms_sql_firewall.create_capture (
username => 'app_user',
top_level_only => false,
start_capture => true);
end;
/
 
PL/SQL procedure successfully completed.
Let’s run the application

We now need to run the application long enough to collect the context information of all users who will be querying it. We will greatly simplify this phase here by performing a single select on the view we created earlier, in a separate session, using the application user's login and password:

[oracle@grs-oraclecloudinstance bin]$ ./sqlplus app_user/app_user@free

SQL*Plus: Release 23.0.0.0.0 - Developer-Release on Mon Sep 18 03:16:56 2023
Version 23.2.0.0.0

Copyright (c) 1982, 2023, Oracle. All rights reserved.

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
SQL> set linesize 200 pagesize 40
SQL> select * from schema_owner.non_sensitive_view;

CUSTOMER_NAME                                      CITY
-------------------------------------------------- ---------------------------------
steulet                                            Delemont
errard                                             Courrendlin
Let’s stop the capture

Let's assume the application ran for the necessary duration to catch all the contextual information, and stop the capture:

SQL> exec dbms_sql_firewall.stop_capture('app_user');
PL/SQL procedure successfully completed.
Let’s have a look on what we captured (Review)

When querying dba_sql_firewall_capture_logs, one can view the logs as demonstrated below:

SQL> column command_type format a9
column current_user format a10
column client_program format a20
column os_user format a8
column ip_address format a10
column sql_text format a30
select command_type, current_user, client_program, os_user, ip_address, sql_text from dba_sql_firewall_capture_logs where username='APP_USER';

COMMAND_TYPE CURRENT_US CLIENT_PROGRAM       OS_USER  IP_ADDRESS SQL_TEXT
------------ ---------- -------------------- -------- ---------- ------------------------------
SELECT       APP_USER   sqlplus@grs-oraclecl oracle   10.0.0.79  SELECT * FROM SCHEMA_OWNER.NON
                        oudinstance (TNS V1-                     _SENSITIVE_VIEW
                        V3)

SELECT       APP_USER   sqlplus@grs-oraclecl oracle   Local      SELECT DECODE (USER,:"SYS_B_0"
                        oudinstance (TNS V1-                     ,XS_SYS_CONTEXT (:"SYS_B_1",:"
                        V3)                                      SYS_B_2"),USER) FROM SYS.DUAL

SELECT       APP_USER   sqlplus@grs-oraclecl oracle   10.0.0.79  SELECT DECODE (USER,:"SYS_B_0"
                        oudinstance (TNS V1-                     ,XS_SYS_CONTEXT (:"SYS_B_1",:"
                        V3)                                      SYS_B_2"),USER) FROM SYS.DUAL
  

To read the output more easily you will find a screenshot below:

Output of the SQL command

Let's generate the allow list (Generate Allow-List)

Using DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST we generate the allow_list:

SQL> exec DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST ('APP_USER');
PL/SQL procedure successfully completed.
Let’s enable protect our database (Enable Allow-List)

Using the procedure below, we activate the allow-list and block violations:

SQL> begin
dbms_sql_firewall.enable_allow_list (
username => 'app_user',
enforce => dbms_sql_firewall.enforce_all,
block => true);
end;
/ 

PL/SQL procedure successfully completed.
What about a test?

Let’s run a query that hasn’t been run before. We get a SQL Firewall violation:

[oracle@grs-oraclecloudinstance bin]$ ./sqlplus app_user/app_user@free

SQL> select credit_card from schema_owner.customers;
select credit_card from schema_owner.customers
*
ERROR at line 1:
ORA-47605: SQL Firewall violation
What can we find in the log?

Inspecting the log we can have some details regarding the rule violation:

SQL> select sql_text,
firewall_action,
ip_address,
cause,
occurred_at
from dba_sql_firewall_violations;
SQL_TEXT                       FIREWAL IP_ADDRESS CAUSE                OCCURRED_AT
------------------------------ ------- ---------- -------------------- -------------
SELECT CREDIT_CARD FROM SCHEMA Blocked 10.0.0.79  SQL violation        18-SEP-23 04.08.54.371122 AM +00:00
_OWNER.CUSTOMERS

To read the output more easily you will find a screenshot below:

Output of the SQL command

Conclusion

Oracle provides a series of tools to control data access, and SQL Firewall joins tools such as DBMS_CRYPTO, TDE and Oracle Virtual Private Database, to name just a few. SQL Firewall is used in this blog to block queries, but it can also be configured for query monitoring only (a sketch of this follows the list below). It's also interesting to note that we can use Oracle Data Pump to export and import SQL Firewall captures and allow-list metadata. Of course, on large volumes of data, SQL Firewall can have an impact on performance (I didn't test it). In such a context, Oracle recommends the two measures below:

  • Allocate at least an additional 2G to the LARGE_POOL_SIZE parameter setting, on top of the existing LARGE_POOL_SIZE requirement.
  • Resize the SGA_TARGET parameter setting to include this additional requirement. Ensure that the final size is 8G or more.
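
For the monitoring-only mode mentioned above, the allow-list is simply enabled with blocking turned off, so violations are logged in DBA_SQL_FIREWALL_VIOLATIONS instead of raising ORA-47605. A sketch, based on the call used earlier:

SQL> begin
  dbms_sql_firewall.enable_allow_list (
    username => 'app_user',
    enforce  => dbms_sql_firewall.enforce_all,
    block    => false);  -- log violations only, do not block
end;
/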

Among the different blogs that address this topic we can find:

L’article Oracle to the rescue of MGM with SQL Firewall? est apparu en premier sur dbi Blog.

Clone Any Voice with AI - Locally Install XTTS Model

Pakistan's First Oracle Blog - Sat, 2023-09-16 21:52

This video is a step-by-step tutorial on how to install and run the Coqui XTTS model locally. XTTS is a voice generation model that lets you clone voices into different languages by using just a quick 3-second audio clip.




Commands Used:


!pip install transformers
!pip install tts

from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)

tts.tts_to_file(
    text="This is my new cloned voice in AI. If you like, don't forget to subscribe to this channel.",
    file_path="output.wav",
    speaker_wav="speaker.wav",
    language="en")

Categories: DBA Blogs

Create Onion Layers of Security

Pete Finnigan - Fri, 2023-09-15 18:26
I did a talk in 2022 called Creating Onion Layers of Security and, as you can see from the previous link, I have posted a pdf of my MS PPT slides to our website. I have also added the talk to....[Read More]

Posted by Pete On 15/09/23 At 02:01 PM

Categories: Security Blogs

Oracle databases on other clouds?

Tim Hall - Fri, 2023-09-15 04:05

Oracle just announced the expansion of their partnership with Microsoft to deliver Oracle database services in Azure. You can read the blog post here. Oracle and Microsoft expand partnership to deliver Oracle database services in Azure This is a very interesting development for a number of reasons. Here are some of my thoughts… The database … Continue reading "Oracle databases on other clouds?"

The post Oracle databases on other clouds? first appeared on The ORACLE-BASE Blog.

The Hint OPT_ESTIMATE is 20 years old: can we use it now?

Tom Kyte - Fri, 2023-09-15 00:06
Hi, I discovered the powerful, not officially documented, hint OPT_ESTIMATE about 12 years ago. Recently I discovered the following official document about it: Oracle Support Document 2402821.1 (How To Use Optimizer Hints To Specify Cardinality For Join Operation). My question: is it now a documented hint, or should it still not be used in SQL code? Regards, Maurice
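
For readers who have not met it, the hint is typically written as below (syntax as circulated in blogs and the MOS note; since the hint is undocumented, treat this purely as an illustration):

-- Tell the optimizer the join of e and d returns about 50 rows,
-- or scale a single table's cardinality estimate by a factor of 10:
select /*+ opt_estimate(join (e d) rows=50) */ ... ;
select /*+ opt_estimate(table t scale_rows=10) */ ... ;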
Categories: DBA Blogs

How to Install Llama 2 on Google Cloud Platform - Step by Step Tutorial

Pakistan's First Oracle Blog - Thu, 2023-09-14 21:42

This video gives step-by-step instructions on how to deploy and run the Llama 2 and Code Llama models on GCP in the Vertex AI API, easily and quickly.



Categories: DBA Blogs

Las Vegas, from sin city to green city?

Yann Neuhaus - Thu, 2023-09-14 16:27
Introduction

Oracle CloudWorld 2023 will start next week in Las Vegas, and I’m happy to participate in this worldwide event. As a Swiss citizen, when I think of Las Vegas, the first image that comes to my mind is that of dazzling lights, opulent casinos, and dancing fountains. A city that never sleeps, where excess seems to be the norm. However, behind this glittering facade lies a surprising reality: Las Vegas appears to be a leader in many sustainable initiatives and a model of responsible water management. It’s perhaps also because of this that a company such as Oracle decided to organize its most famous event in this city.

The well-known Bellagio fountains

Environmental Savior or Sinful Excess?

While it won’t be my first visit to Sin City, as Las Vegas hosts several events like CES and is a popular destination for those exploring the West Coast on road trips, each visit brings its own unique experiences. On my previous trips to Las Vegas, what truly captivated my attention was how a city nestled in the heart of the desert could orchestrate mesmerizing water spectacles, such as the renowned ones adorning the famous Las Vegas Strip.

Working for a company that has dedicated significant efforts to sustainability initiatives and proudly holds an EcoVadis certification, I felt compelled to delve deeper into the question of how this desert city manages its energy and its precious water resources. To craft this article, I had the privilege of engaging with Jace Radke, the Public & Media Relations Supervisor of Las Vegas, as well as Mack Branson, who represents both the Southern Nevada Water Authority (SNWA) and the Las Vegas Valley Water District. I thank them for generously sharing their time and expertise, which greatly enriched the content of this article.

Mr. Branson L. Mack – SNWA | LVVWD Public Outreach

World-class in terms of sustainability

As stated by Jace Radke, "More than 2.1 million people reside in Southern Nevada and 40 million people from around the world travel to Las Vegas annually. Many know Las Vegas as a world-renowned entertainment and tourist destination; however, very few of them know that Las Vegas is also world-class when it comes to being green and sustainable." Just to provide a few facts and figures that attest to Las Vegas's efforts in terms of sustainability (a more exhaustive list can be found here: www.lasvegasnevada.gov/sustainability):

  • LEED (Leadership in Energy and Environmental Design) certified buildings, including Las Vegas City Hall, which opened in 2012.
  • Investment in hundreds of miles of bike lanes.
  • Completion of more than six megawatts of solar.
  • Replacement of 42,000 streetlights with LED lighting.
  • Reduced annual water consumption by 256 million gallons from 2008 level.
  • Every public building, park and streetlight is served by renewable energy.
  • Several years ago, the largest rooftop solar array in the country went online at Mandalay Bay, with 26,000 panels covering 28 acres.

Many other initiatives are planned. For example, the city plans to plant 60'000 new trees to help mitigate heat island impacts in the city and lower temperatures. This initiative is part of the city's 2050 Master Plan: https://www.lasvegasnevada.gov/Business/Planning-Zoning/Master-Plan

Las Vegas Strip by night

These measures have been recognized: in 2014, the city was honored with the US Conference of Mayors' Climate Protection Award in Dallas. Such initiatives also led to a reduction in annual energy costs, from a peak of $15 million in 2008 to less than $10 million per year.

However, Jace Radke remains aware of the ongoing challenges that pose a threat to community well-being: "Many communities, like Las Vegas, are facing more heat waves, flooding, or drought – an issue they face together with cities across the Southwest. Of course, water is clearly a big part of our story."

Not everything may be perfect in Vegas in terms of sustainability, and there is still room for improvement. However, when we examine the 2050 Master Plan, it becomes evident that the authorities are well aware of this potential for enhancement.

Water savings of 48’425 Olympic-size swimming pools

Before delving into water management, it’s crucial to establish a rough idea of the proportions we are discussing. As mentioned by Jace Radke and confirmed by Mack Branson, “while many look at Las Vegas’s growth and water use at casinos, golf courses and fountains as the cause of declining water levels in Lake Mead, the fact is that Las Vegas uses less than 4% of Colorado River water. Additionally, it’s important to mention that the resort industry only uses 3% of all water used in Las Vegas.”.

To keep the proportions in focus, consider that the Southern Nevada region receives 100 millimeters of precipitation per year, while a country like Switzerland receives between 1000 and 1500 millimeters per year (according to meteosuisse.admin.ch). Naturally, these conditions have prompted the Southern Nevada Water Authority to implement certain restrictions, such as:

  • Replacing useless grass
  • Limiting water gardens
  • Prohibiting installation of new grass
  • Limiting residential pool sizes
  • Moratorium on evaporative cooling
  • Annual water budgeting for Golf courses

As explained by Mack Branson, some strong measures will be enforced: "By the end of 2026, it will be illegal to irrigate grass which is considered decorative. This measure is due to the fact that grass needs approximately 3000 liters of water per square meter per year (73 gallons per square foot). The reason for this measure is very simple: water that we use outdoors can be used only one time, which is not the case for water used indoors, because we can recycle it."

Due to all these measures, the Southern Nevada authorities estimate that Southern Nevada's consumptive water use declined by approximately 32 billion gallons (roughly 121 billion liters) between 2002 and 2022, despite the influx of approximately 750,000 new residents and nearly 40 million annual visitors during that span (Conservation fact sheet – Comprehensive 2023.pdf, edited by the Southern Nevada Water Authority). That represents approximately 48'425 Olympic-size swimming pools.

Lake Mead, a challenge that goes beyond the borders of Las Vegas

As we have all witnessed in recent years, the water level of Lake Mead declined significantly. Fortunately, it has rebounded over the past months. Nevertheless, when confronting the challenge of declining water levels, it's important to remember that Las Vegas uses only 4% of the Colorado River water. Additional efforts are therefore required beyond Las Vegas and Nevada.

Lake Mead and its dam

As mentioned by Mack Branson: "Our community recycles and returns nearly every drop of indoor water use back to Lake Mead, which provides 90 percent of Southern Nevada's water supply. Every liter returned allows us to take another out of the lake, treat it to drinking water standards, and deliver it to the community. We are the only community in the U.S. that we know of that recycles all of its indoor water use."

However, Mack Branson is aware that the challenges related to Lake Mead do not stop at the borders of Las Vegas: "It's going to require every sector in every state that uses Colorado River water to use less. The largest water user of the Colorado River by sector is agriculture, which is beneficial to our nation's food supply. However, a significant portion of that agricultural water use still employs inefficient irrigation techniques and produces a lot of feed for cattle (alfalfa). Alfalfa has become one of the dominant water users in the agricultural sector. While it's essential to support the dairy industry, we also observe a substantial portion of this alfalfa being exported out of the United States. Thus, we are effectively exporting Colorado River water out of the United States."

Intake 3, a drought insurance

In response to unprecedented drought conditions, and for the long-term preservation of Southern Nevada's primary water storage reservoir, Lake Mead, the Southern Nevada Water Authority built a third drinking-water intake (Intake 3) capable of extracting Colorado River water even when the lake's elevation is below 1,000 feet (~304 meters) above sea level.

Construction of Intake 3 commenced in 2008, using a 24-foot (~7.3 meter) diameter tunnel boring machine to excavate a tunnel beneath Lake Mead. This tunnel was subsequently connected to an intake structure anchored to the lakebed with over 1,000 truckloads of concrete.

Intake 3 of Lake Mead – source Courtesy Southern Nevada Water Authority

Currently, Lake Mead stands at an elevation of 1,066 feet (325 meters), as indicated by: https://mead.uslakes.info/level.asp. This means it is below the level of Intake Number 1. Consequently, Pumping Stations Number 2 and Number 3 are in operation. In a few months, when the Lake Mead elevation rises above Intake Number 1, Pumping Station 3 will be deactivated to conserve energy. This decision is driven by the greater energy consumption of Pumping Station Number 3, attributable to the depth of Intake Number 3.

Conclusion

The challenges encountered by Las Vegas, and more globally by the Southwest, will impact other regions around the world in the future. Behavioral adaptations such as the replacement of grass, water recycling techniques, strategies to reduce water evaporation, and environmental certifications for hotels are all practices that are expected to become more widespread in the coming years.

Undoubtedly, Las Vegas is a city with high energy demands, and there is still much work to be done. However, over the past several years, awareness has been steadily increasing, giving rise to promising ideas and a genuine commitment to making meaningful improvements.

Climate change is affecting all of us, and we need to adapt to build resilience. Climate change in the Southwestern United States will result in a warmer and drier future, and if we do not accept that degrowth can be a solution, people will have to be innovative to face these challenges.

L’article Las Vegas, from sin city to green city? est apparu en premier sur dbi Blog.

UNIX/Linux Time Command : Record elapsed time

Tim Hall - Thu, 2023-09-14 02:21

In a recent post I mentioned using a scratchpad to record everything I do. As part of that process I try to make regular use of the UNIX/Linux "time" command to record elapsed times of long-running commands. It's really simple to use. All you do is put "time" in front of the command and … Continue reading "UNIX/Linux Time Command : Record elapsed time"
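
For example, a quick illustration (the command and the reported timings are made up; any long-running command works the same way):

$ time tar -czf /tmp/admin.tar.gz /u01/app/oracle/admin

real    2m13.471s
user    1m58.003s
sys     0m9.822s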

The post UNIX/Linux Time Command : Record elapsed time first appeared on The ORACLE-BASE Blog.
