12.1.0.2 brought JSON support to Oracle Database. This helped you work with JSON documents stored in clobs or varchar2s.
These are fantastic. But storing raw JSON should be the exception, not the norm. Most of the time you should shred your JSON documents into relational tables.
This leaves you with a problem though. Getting the data back out in JSON format!
Trying to write your own JSON generator is hard. So in 12.2 we offer a whole host of options to help you get the job done.
12.2 provides four key functions to help you write SQL that returns data in JSON format:

json_object
json_objectagg
json_array
json_arrayagg
You use the json_object* functions to create a series of key-value pair documents, i.e. the output has curly braces {}. The json_array* functions take a list of values and return it as an array, i.e. in square brackets [].
For each row in the input, the non-agg versions of these functions output a row. The agg versions combine multiple rows into a single document or array.
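For instance, here's a quick sketch of the difference between json_array and json_arrayagg. The non-agg version builds one array per row from its arguments; the agg version rolls many input rows up into a single array:

-- one row in, one array out
select json_array ( 1, 2, 3 ) ja from dual;

JA
-------
[1,2,3]

-- three rows in, one array out
select json_arrayagg ( level order by level ) ja
from   dual
connect by level <= 3;

JA
-------
[1,2,3]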
OK. So how do these work?
Let's look at a fuller example.
Say you're using the classic employees and departments tables. For each department you want a JSON document that contains:
The department name
An array of its employees
Each element of this array should be its own document, listing the employee's name and their job title.
For example:
{ "department": "Accounting", "employees": [ { "name": "Shelley,Higgins", "job": "Accounting Manager" }, { "name": "William,Gietz", "job": "Public Accountant" } ] }
How do you create this using the new functions?
Let's work from the inside out:
select json_object (
         'department' value d.department_name,
         'employees' value json_arrayagg (
           json_object (
             'name' value first_name || ',' || last_name,
             'job' value job_title
           )
         )
       )
from   hr.departments d, hr.employees e, hr.jobs j
where  d.department_id = e.department_id
and    e.job_id = j.job_id
group  by d.department_name;
And voila! You have your JSON!
From time to time you'll want to cast a value to a different data type. This can bring problems if your values are incompatible with the desired type.
You could overcome this with the validate_conversion function we discuss below. But there is another way. Cast now has a "default on conversion error" clause.
This specifies which value Oracle returns if it can’t convert the expression to the type you wanted. For example, say you're attempting to cast a varchar2 column to a date. But it happens to include the value "not a date". You'd get a nasty error:
select cast ( 'not a date' as date ) from dual;

ORA-01858: a non-numeric character was found where a numeric was expected
With the new clause you can tell Oracle to return a "magic date" instead of throwing an exception. For example:
select cast (
         'not a date' as date
         default date'0001-01-01' on conversion error
       ) dt
from   dual;

DT
--------------------
01-JAN-0001 00:00:00
You can then add checks to your code for this magic value.
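For example, a minimal sketch of such a check, assuming a hypothetical staging table with a date_string column:

select count(*) suspect_rows
from   (
  -- rows where the conversion fell back to the magic date
  select cast (
           date_string as date
           default date'0001-01-01' on conversion error
         ) dt
  from   staging_table
)
where  dt = date'0001-01-01';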
Note that the default value has to match the data type you're converting to. So if you're casting to a date, you can't return a string:
select cast (
         '01012010' as date
         default 'not a date' on conversion error
       ) dt
from   dual;

ORA-01858: a non-numeric character was found where a numeric was expected
And, as with validate_conversion, cast uses your NLS settings for the default format. If you want to override these, pass the format as a parameter:
select cast (
         '01012010' as date
         default '01010001' on conversion error,
         'ddmmyyyy'
       ) dt
from   dual;

DT
--------------------
01-JAN-2010 00:00:00
This is neat. But at first glance it seems, well, limited. After all, how often do you use cast? If you're like me, the answer is "rarely".
But there's more to it than that!
The conversion error clause also applies to other casting functions. Such as:
to_date()
to_number()
to_yminterval()
etc.

Now that's really useful! These are functions you use all the time.
So you can write data type conversions like this:
select to_date (
         'not a date'
         default '01010001' on conversion error,
         'ddmmyyyy'
       ) dt
from   dual;

DT
--------------------
01-JAN-0001 00:00:00
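The same clause works with to_number. A quick sketch, using -1 as the fallback value:

select to_number (
         'not a number' default -1 on conversion error
       ) n
from   dual;

         N
----------
        -1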
Combining this with validate_conversion makes changing expressions to a new data type much easier.
We've covered a lot of new functionality. Some of it you'll use straight away. Other bits may have to wait a while.
In any case, when you upgrade to 12.2 you'll want to test all your code to ensure it works as expected. Which brings the question:
"How much of my code did the tests actually run?"
Coverage metrics will help immensely with this.
Simple line-level analysis of the tests isn't good enough. To see why, consider the following code:
We have a basic function that returns its argument and calls dbms_output.
The procedure calls the function twice in a single if statement:
create or replace function f (p int) return int as
begin
  dbms_output.put_line('Executed: ' || p);
  return p;
end;
/

create or replace procedure p is
begin
  if f(1) = 1 or f(2) = 2 then
    dbms_output.put_line('this');
  else
    dbms_output.put_line('that');
  end if;
end p;
/
Due to short-circuit evaluation, f(2) is never executed! You can see this from the output:
SQL> exec p
Executed: 1
this

Anything working at the line level will incorrectly report this as fully covered.
To overcome this, you need details of "basic block" executions.
So what is a "basic block"?
It's a piece of code that either runs completely or not at all. Code always belongs to exactly one basic block. For example:
if f(1) = 1 or f(2) = 2 then
  dbms_output.put_line('this');
else
  dbms_output.put_line('that');
end if;
has four basic blocks. One for each call to f and two for the calls to dbms_output.put_line.
The new code coverage functionality measures and reports on these basic blocks.
Using it is easy. First you need to create coverage tables to store the metrics:
exec dbms_plsql_code_coverage.create_coverage_tables;
Then call start_coverage before your test and stop_coverage after:
declare
  run_id pls_integer;
begin
  run_id := dbms_plsql_code_coverage.start_coverage('TEST');
  p;
  dbms_plsql_code_coverage.stop_coverage;
end;
/
You can then get metrics by querying the dbmspcc* tables that hold these details:
select owner, name, type,
       round( ( sum(covered)/count(*) * 100), 2) pct_covered
from   dbmspcc_runs r
join   dbmspcc_units u
on     r.run_id = u.run_id
join   dbmspcc_blocks b
on     r.run_id = b.run_id
and    u.object_id = b.object_id
where  r.run_comment = 'TEST'
group  by owner, name, type;

OWNER NAME  TYPE       PCT_COVERED
----- ----- ---------- -----------
CHRIS P     PROCEDURE           50
CHRIS F     FUNCTION           100
This is all well and good. But there's always some code which your tests don't cover. Maybe it's deprecated, so you don't need to test it. Or it's "just-in-case" code covering theoretically possible but practically impossible cases. Such as the infamous "when others" exception handler.
You want to exclude these sections from your reports. Fortunately you can with the coverage pragma. By marking lines as "NOT_FEASIBLE" you can filter these out of your reports:
create or replace procedure p is
begin
  if f(1) = 1 or f(2) = 2 then
    dbms_output.put_line('this');
  else
    pragma coverage ('NOT_FEASIBLE');
    dbms_output.put_line('that');
  end if;
end p;
/
Rerun the tests and you can hide the untestable parts in your report!
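For example, a sketch of the coverage query above, assuming you filter on the not_feasible flag in dbmspcc_blocks:

select owner, name, type,
       round( ( sum(covered)/count(*) * 100), 2) pct_covered
from   dbmspcc_runs r
join   dbmspcc_units u
on     r.run_id = u.run_id
join   dbmspcc_blocks b
on     r.run_id = b.run_id
and    u.object_id = b.object_id
where  r.run_comment = 'TEST'
and    b.not_feasible = 0  -- exclude blocks marked NOT_FEASIBLE
group  by owner, name, type;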
Materialized views (MVs) can give an amazing performance boost. Once you create one based on your query, Oracle can get the results directly from the MV instead of executing the statement itself. This can make SQL significantly faster. Especially when the query processes millions of rows but there are only a handful in the output.
There's just one problem.
The data in the MV has to be fresh. Otherwise Oracle won't do the rewrite.
You could of course query the MV directly. But the data will still be old.
So you need to keep the materialized view up-to-date. The easiest way is to declare it as "fast refresh on commit".
But this is easier said than done. Doing this has a couple of issues:
Only some queries support on commit refreshes
Oracle serializes MV refreshes

So if you have complex SQL you may not be able to use query rewrite. And even if you can, on high transaction systems the refresh overhead may cripple your system.
So instead of "fast refresh on commit", you make the MV "fast refresh on demand". And create a job to update it. Which runs every second!
But no matter how frequently you run the job, there will always be times when the MV is stale. So query performance could switch between lightning fast and dog slow. A guaranteed way to upset your users!
So how do you overcome this?
With real time materialized views!
These give the best of both worlds. You can refresh your MV on demand. But still have it return up-to-date information.
To do this, create the MV with the clause:
on query computation

For example:
create table t (
  x not null primary key,
  y not null
) as
  select rownum x, mod(rownum, 10) y
  from   dual
  connect by level <= 1000;

create materialized view log on t
  with rowid (x, y) including new values;

create materialized view mv
refresh fast on demand
enable on query computation
enable query rewrite
as
  select y, count(*) c1
  from   t
  group  by y;
With this, you can add more data to your table:
insert into t
  select 1000+rownum, 1 from dual
  connect by level <= 100;

commit;
And Oracle can still use the MV to rewrite. Even though the MV is stale!
select /*+ rewrite */ y, count(*)
from   t
group  by y;
It does this by:
Querying the stale MV
Then applying the inserts, updates and deletes in the MV log to it

This can lead to some scary looking execution plans!
The point to remember is Oracle is reading the materialized view log, then applying the changes to the MV. So the longer you leave it between refreshes, the more data there will be. You'll need to test to find the sweet spot, balancing the refresh process against applying MV change logs on query rewrite.
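When you do refresh, a fast refresh on demand is a one-liner (method 'F' requests a fast refresh):

exec dbms_mview.refresh ( 'mv', method => 'F' );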
You can even get the up-to-date information when you query the MV directly. To do so, add the fresh_mv hint:
select /*+ fresh_mv */ * from mv;
In previous releases, object names were limited to 30 bytes. Starting in 12.2 we've increased this limit. The maximum is now 128 bytes. So now you can create objects like:
create table with_a_really_really_really_really_really_long_name (
  and_lots_and_lots_and_lots_and_lots_and_lots_of int,
  really_really_really_really_really_long_columns int
);
Remember: the limit is 128 bytes. Not characters. So if you’re using a multi-byte character set, you’ll find you can’t create:
create table tablééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééééé (
  is_67_chars_but_130_bytes int
);
This is because é uses two bytes in character sets such as UTF8. So even though the string above is only 67 characters, it needs 130 bytes!
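You can check this yourself with length vs. lengthb. A quick sketch (the byte count assumes an AL32UTF8 database character set):

select length ('tablé') chars,   -- character count
       lengthb ('tablé') bytes   -- byte count: é takes two bytes
from   dual;

CHARS BYTES
----- -----
    5     6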
Say you want a comma-separated list of the indexes on each table. You can build this with listagg:

select table_name,
       listagg(index_name, ',')
         within group (order by index_name) inds
from   user_indexes
group  by table_name;
This is all very well and good. But there's a potential problem with it. Listagg() returns a varchar2. This is limited to 4,000 bytes (32,767 if you’re using extended data types).
So in 11.2 and 12.1, you needed 130 or more indexes on a table before you started running into issues.
And if you have that many indexes on one table, you've got bigger problems than hitting this limit!
But with 128-byte names, the limit is much easier to hit. Create too many indexes with long names and your listagg query will throw frustrating ORA-01489 errors.
Getting around this is tricky. So in 12.2 we've added an overflow clause. To use it, place "on overflow truncate" after the separator:
select table_name,
       listagg(index_name, ',' on overflow truncate)
         within group (order by index_name) inds
from   user_indexes
group  by table_name;
With this in place, instead of an exception your output will now look something like:
...lots_and_lots_and_lots,of_indexes,...(42)

The "..." at the end indicates that the output is larger than Oracle can return. The number in brackets shows how many characters Oracle trimmed from the results. So not only can you see there is more data, you get an indication of how much there is.
The full syntax of this is:
listagg (
  things, ','
  [ on overflow (truncate|error) ]
  [ text ] [ (with|without) count ]
) within group (order by cols)
Now you can explicitly say whether you want error or truncation semantics. There's a good chance you've already written code to handle the ORA-1489 errors. So to keep the behaviour of your code the same, the default remains error.
The text and count clauses control what appears at the end of the string. If you want to replace "..." with "more", "extra" or a "click for more" hyperlink, just provide your new string!
select table_name,
       listagg(index_name, ',' on overflow truncate '[more]')
         within group (order by index_name) inds
from   user_indexes
group  by table_name;
You can also remove the number of trimmed characters by specifying "without count".
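For example, a sketch combining a custom terminator with "without count":

select table_name,
       listagg(index_name, ',' on overflow truncate '[more]' without count)
         within group (order by index_name) inds
from   user_indexes
group  by table_name;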
It's one of those all too common problems. Validating that a "date" is indeed a date.
A prime cause of this is the terrible practice of storing dates as strings. One of the biggest issues is it enables people to store things that clearly aren't dates in "date" columns:
create table dodgy_dates (
  id             int,
  is_this_a_date varchar2(20)
);

insert into dodgy_dates values (1, 'abc');
Along with a whole bunch of values that might be dates. If you supply the correct format mask:
insert into dodgy_dates values (2, '20150101');
insert into dodgy_dates values (3, '01-jan-2016');
insert into dodgy_dates values (4, '01/01/2016');
Returning only the valid dates is tricky. If you try to convert everything using to_date(), you'll get exceptions:
select t.*
from   dodgy_dates t
where  to_date(is_this_a_date) < sysdate;

ORA-01858: a non-numeric character was found where a numeric was expected
or maybe:

ORA-01861: literal does not match format string

or:

ORA-01843: not a valid month
To make your life easier, we've created a new function, validate_conversion. You pass this a value and a data type. Then Oracle will tell you whether it can do the conversion. If it can, it returns one. Otherwise you get zero.
To return the rows in the table that can be real dates, place this in your where clause:
select t.*
from   dodgy_dates t
where  validate_conversion(is_this_a_date as date) = 1;

        ID IS_THIS_A_DATE
---------- --------------------
         3 01-jan-2016
There's no error. But where did rows 2 and 4 go? They're possible dates too! Validate_conversion only tests one date format at a time. By default this is your NLS_DATE_FORMAT.
Each client can set their own format. So if you rely on this, you may get unexpected results…
To avoid this, I strongly recommend you pass the format as a parameter. For example:
select t.*
from   dodgy_dates t
where  validate_conversion(is_this_a_date as date, 'yyyymmdd') = 1;

        ID IS_THIS_A_DATE
---------- --------------------
         2 20150101
So to return all the possible dates, you'll need to call this multiple times:
select t.*
from   dodgy_dates t
where  validate_conversion(is_this_a_date as date, 'yyyymmdd') = 1
or     validate_conversion(is_this_a_date as date, 'dd/mm/yyyy') = 1
or     validate_conversion(is_this_a_date as date, 'dd-mon-yyyy') = 1;

        ID IS_THIS_A_DATE
---------- --------------------
         2 20150101
         3 01-jan-2016
         4 01/01/2016
And this isn't just for dates. You can use validate_conversion with any of the following data types:
binary_double
binary_float
date
interval day to second
interval year to month
number
timestamp
timestamp with time zone

If you want to convert strings to dates, you'll need similar logic in the select. This will test the expression against various format masks. If it matches, call to_date with the relevant mask:
case
  when validate_conversion(is_this_a_date as date, 'yyyymmdd') = 1 then
    to_date(is_this_a_date, 'yyyymmdd')
  when validate_conversion(is_this_a_date as date, 'dd/mm/yyyy') = 1 then
    to_date(is_this_a_date, 'dd/mm/yyyy')
  when validate_conversion(is_this_a_date as date, 'dd-mon-yyyy') = 1 then
    to_date(is_this_a_date, 'dd-mon-yyyy')
end
But this is clunky. Fortunately, 12.2 has more functionality to support data type conversions: the "default on conversion error" clause for cast and the to_* functions we covered earlier.
Times change. New code quickly becomes legacy code. And legacy code is often superseded by better, faster code. So you deprecate the old code.
But this brings a problem:
How do you stop people using the legacy modules?
People tend to stick with what they know. Even after you've repeatedly told everyone to move to the new module there's always (at least) one developer who insists on using the deprecated procedure instead of the newer, shinier option.
And in complex applications it's tough to keep track of what's obsolete.
This is tough to solve.
So to help you with the deprecation process, we've introduced a new pragma for this.
To use it, place
pragma deprecate ( deprecated_thing, 'Message to other developers' );
directly below the declaration of the item you're deprecating.
Great. But how does it help?
We've added a bunch of new PL/SQL warnings: PLW-6019 to PLW-6022. Enable these and Oracle will tell you if you're using deprecated code:
alter session set plsql_warnings = 'enable:(6019,6020,6021,6022)';

create or replace procedure your_old_code is
  pragma deprecate (
    your_old_code, 'This is deprecated. Use new_code instead!'
  );
begin
  null;
end your_old_code;
/

show err

Warning(2,3): PLW-06019: entity YOUR_OLD_CODE is deprecated
This is great. But we've all been ignoring the "AUTHID DEFINER" warning forever! If code is truly obsolete, it would be good if you could stop people using it altogether.
Fortunately you can!
Here's the great thing about warnings. You can upgrade them to be errors! PLW-6020 is thrown when you write code calling a deprecated item. Set this to error and the offending code won't compile:
alter session set plsql_warnings = 'error:6020';

create or replace procedure calling_old_code is
begin
  your_old_code();
end calling_old_code;
/

sho err

3/3  PLS-06020: reference to a deprecated entity: This is deprecated. Use new_code instead!
Of course, if you turn PLW-6020 into an error system wide, a lot of stuff might break! So you can selectively upgrade it on given objects:
alter procedure calling_old_code compile plsql_warnings = 'error:6020' reuse settings;
So now you have the power to force others to stop using prehistoric code.