Feed aggregator

Domain Indexes -- 3 : CTXCAT Index

Hemant K Chitale - Sat, 2018-04-21 11:14
In previous posts in December 2017, I had demonstrated a CONTEXT Index.

A CONTEXT Index is used for full-text retrieval from large pieces of text (or document formats stored in LOBs).

A CTXCAT Index is best suited for small fragments of text that are to be indexed with other relational data.

Before I begin with the CTXCAT index: in addition to the CTXAPP role (which I had granted during the earlier demonstration), the account also needs the CREATE TRIGGER privilege, because a CTXCAT index is maintained synchronously on DML (Oracle Text uses an internal trigger for this).

SQL> grant ctxapp to ctxuser;

Grant succeeded.

SQL> grant create trigger to ctxuser;

Grant succeeded.

SQL>


I can now proceed with the CTXUSER demonstration.

SQL> connect ctxuser/ctxuser
Connected.
SQL> create table books
2 (book_id integer primary key,
3 book_title varchar2(250) not null,
4 book_author varchar2(80),
5 book_subject varchar2(25),
6 shelf_id integer)
7 /

Table created.

SQL>
SQL> insert into books values
2 (1,'A Study In Scarlet','Arthur Conan Doyle','Mystery',1);

1 row created.

SQL> insert into books values
2 (2,'The Sign Of Four','Arthur Conan Doyle','Mystery',1);

1 row created.

SQL> insert into books values
2 (3,'Murder On The Orient Express','Agatha Christie','Mystery',1);

1 row created.

SQL> insert into books values
2 (4,'A Brief History of Time','Stephen Hawking','Science - Physics',2);

1 row created.

SQL>
SQL> insert into books values
2 (5,'2001: A Space Odyssey','Arthur C Clarke','Science Fiction',3);

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>


Next, I create what is called an Index Set, which specifies the structured columns that are to be included in the CTXCAT Index.  I then define the CTXCAT Index on the BOOK_TITLE column.

SQL> begin
2 ctx_ddl.create_index_set('books_set');
3 ctx_ddl.add_index('books_set','book_subject');
4 ctx_ddl.add_index('books_set','shelf_id');
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> create index books_title_index
2 on books (book_title)
3 indextype is ctxsys.ctxcat
4 parameters ('index set books_set')
5 /

Index created.

SQL>


Now, I can use the Index to query the table, using the CATSEARCH operator instead of the CONTAINS operator. My query includes both BOOK_TITLE and SHELF_ID.

SQL> select book_title,book_author,book_subject,shelf_id
2 from books
3 where catsearch (book_title,'History','shelf_id=1') > 0
4 /

no rows selected

SQL> select book_title,book_author,book_subject,shelf_id
2 from books
3 where catsearch (book_title,'History','shelf_id>1') > 0
4 /

BOOK_TITLE
--------------------------------------------------------------------------------
BOOK_AUTHOR
--------------------------------------------------------------------------------
BOOK_SUBJECT SHELF_ID
------------------------- ----------
A Brief History of Time
Stephen Hawking
Science - Physics 2


SQL>


The CTXCAT Index that I built on BOOK_TITLE also includes BOOK_SUBJECT and SHELF_ID as indexed columns by virtue of the INDEX_SET called "BOOKS_SET".
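
As an aside, the structured part of a CATSEARCH query can also sort on the index-set columns, and the CTXCAT query grammar uses "|" for OR. A sketch (not from the original session; the query word and ordering are illustrative):

select book_title, book_subject, shelf_id
from books
where catsearch (book_title, 'History | Gene', 'shelf_id > 0 order by book_subject') > 0
/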

Now, I add another row and verify whether I need to sync the index (as I had to do with the CONTEXT Index earlier).

SQL> insert into books
2 values
3 (6,'The Selfish Gene','Richard Dawkins','Evolution',2);

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> select book_title,book_author,book_subject,shelf_id
2 from books
3 where catsearch (book_title,'Gene','book_subject > ''S'' ') > 0
4 /

no rows selected

SQL> select book_title,book_author,book_subject,shelf_id
2 from books
3 where catsearch (book_title,'Gene','book_subject > ''E'' ') > 0
4 /

BOOK_TITLE
--------------------------------------------------------------------------------
BOOK_AUTHOR
--------------------------------------------------------------------------------
BOOK_SUBJECT SHELF_ID
------------------------- ----------
The Selfish Gene
Richard Dawkins
Evolution 2


SQL>


Note, specifically, how I could use BOOK_SUBJECT in the query as if looking up a separate index on BOOK_SUBJECT.
The new book was included in the index without a call to CTX_DDL.SYNC_INDEX, as would be required for the CONTEXT IndexType.

The portion of the query on the BOOK_TITLE column does a text search on that column, but the portions on BOOK_SUBJECT and SHELF_ID behave as with regular indexes.


(I know that some readers will dispute the subject categorization "Evolution", but I deliberately threw that in so that I could show a query that uses a predicate filter on something other than "Science".)





Categories: DBA Blogs

Oracle VM Server: my first vm: Error: HVM guest support is unavailable

Dietrich Schroff - Sat, 2018-04-21 09:47
All my tests with Oracle VM Server are running inside Oracle VirtualBox. If you want to do some tests yourself with this setup, you can easily get this error message after powering on your VM:

Server error: Command: ['xm', 'create', '/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualMachines/0004fb00000600005e79798ecb1a63cf/vm.cfg'] failed (1): stderr: Error: HVM guest support is unavailable: is VT/AMD-V supported by your CPU and enabled in your BIOS?
stdout: Using config file "/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualMachines/0004fb00000600005e79798ecb1a63cf/vm.cfg".

To get your system running, you have to change this for your Oracle VM Server node on VirtualBox:
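
As a quick sanity check, you can inspect the Xen capabilities directly on the Oracle VM Server node (a sketch; the exact output varies with the Xen version):

# on the Oracle VM Server node
xm info | grep xen_caps
# with HVM support available, the line includes entries such as hvm-3.0-x86_32 and hvm-3.0-x86_64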


After that you get:

Server error: Command: ['xm', 'create', '/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualMachines/0004fb0000060000b5dca8dccb8b74f6/vm.cfg'] failed (1): stderr: Error: Boot loader didn't return any data!
stdout: Using config file "/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualMachines/0004fb0000060000b5dca8dccb8b74f6/vm.cfg".

But this error is simply due to no boot media being specified for the VM. Therefore you have to add an ISO image to your Oracle VM Server repository.

Can I do it with PostgreSQL? – 19 – Create user … identified by values

Yann Neuhaus - Sat, 2018-04-21 06:39

Puh, that last post in this series is already half a year old. Time is moving too fast :( Today, while being at a customer again, this question came up: Can I do something in PostgreSQL comparable to what I can do in Oracle, namely create a user and provide the hashed password, so that the password is the same on the source and the target (which implies not knowing the password at all)? In Oracle you can find the hashed passwords in user$; where can I find that in PostgreSQL? Let's go.

When we look at the “create user” command there is no option which seems to do that:

postgres=# \h create user
Command:     CREATE USER
Description: define a new database role
Syntax:
CREATE USER name [ [ WITH ] option [ ... ] ]

where option can be:

      SUPERUSER | NOSUPERUSER
    | CREATEDB | NOCREATEDB
    | CREATEROLE | NOCREATEROLE
    | INHERIT | NOINHERIT
    | LOGIN | NOLOGIN
    | REPLICATION | NOREPLICATION
    | BYPASSRLS | NOBYPASSRLS
    | CONNECTION LIMIT connlimit
    | [ ENCRYPTED ] PASSWORD 'password'
    | VALID UNTIL 'timestamp'
    | IN ROLE role_name [, ...]
    | IN GROUP role_name [, ...]
    | ROLE role_name [, ...]
    | ADMIN role_name [, ...]
    | USER role_name [, ...]
    | SYSID uid

Maybe we can just pass the hashed password? Let's try by creating a new user:

postgres=# create user u with login password 'u';
CREATE ROLE

The hashed passwords in PostgreSQL are stored in pg_shadow:

postgres=# select passwd from pg_shadow where usename = 'u';
               passwd                
-------------------------------------
 md56277e2a7446059985dc9bcf0a4ac1a8f
(1 row)

Let's use that hash and create a new user:

postgres=# create user w login encrypted password 'md56277e2a7446059985dc9bcf0a4ac1a8f';
CREATE ROLE

Can we login as w using “u” as a password?

postgres@pgbox:/home/postgres/ [PG10] psql -X -h 192.168.22.99 -p $PGPORT -U w postgres -W
Password for user u: 
psql: FATAL:  no pg_hba.conf entry for host "192.168.22.99", user "w", database "postgres", SSL off

Ok, makes sense. After fixing that:

postgres@pgbox:/home/postgres/ [PG10] psql -X -h 192.168.22.99 -p $PGPORT -U w postgres -W
Password for user w: 
psql: FATAL:  password authentication failed for user "w"

So obviously this is not the way to do it. Do we have the same hashes in pg_shadow?

postgres=# select usename,passwd from pg_shadow where usename in ('w','u');
 usename |               passwd                
---------+-------------------------------------
 u       | md56277e2a7446059985dc9bcf0a4ac1a8f
 w       | md56277e2a7446059985dc9bcf0a4ac1a8f
(2 rows)

Hm, exactly the same. Why can't we login then? The answer is in the documentation: “Because MD5-encrypted passwords use the role name as cryptographic salt, …”. We can verify that by re-creating the “w” user using the same password as that of user “u”:

postgres=# drop user w;
DROP ROLE
postgres=# create user w login password 'u';
CREATE ROLE
postgres=# select usename,passwd from pg_shadow where usename in ('w','u');
 usename |               passwd                
---------+-------------------------------------
 u       | md56277e2a7446059985dc9bcf0a4ac1a8f
 w       | md53eae63594a41739e87141e8333d15f73
(2 rows)

The hashed values are not the same anymore. What does work, of course, is to re-create the user with that hash:

postgres=# drop role w;
DROP ROLE
postgres=# create user w login password 'md53eae63594a41739e87141e8333d15f73';
CREATE ROLE

Now we should be able to login with the password ‘u':

postgres@pgbox:/home/postgres/ [PG10] psql -X -h 192.168.22.99 -p $PGPORT -U w postgres -W
Password for user w: 
psql (10.0 dbi services build)
Type "help" for help.

postgres=> 
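
As a side note, the salting is easy to verify yourself: the stored value is simply the string "md5" prepended to md5(password || rolename). A minimal sketch, computed directly in SQL (the expected output matches the pg_shadow values above):

postgres=# select 'md5' || md5('u' || 'u') as hash_user_u,
postgres-#        'md5' || md5('u' || 'w') as hash_user_w;
             hash_user_u             |             hash_user_w
-------------------------------------+-------------------------------------
 md56277e2a7446059985dc9bcf0a4ac1a8f | md53eae63594a41739e87141e8333d15f73
(1 row)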

Fine. Another way of getting the password hashes is to use pg_dumpall with the "--globals-only" switch:

postgres@pgbox:/home/postgres/ [PG10] pg_dumpall --globals-only > a.sql
postgres@pgbox:/home/postgres/ [PG10] grep -w w a.sql 
CREATE ROLE w;
ALTER ROLE w WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'md53eae63594a41739e87141e8333d15f73';

Hope that helps.

 

The post Can I do it with PostgreSQL? – 19 – Create user … identified by values appeared first on Blog dbi services.

GCP - How to manage SSH keys on VM Instance?

Surachart Opun - Fri, 2018-04-20 23:13
On Google Cloud Platform, you can add SSH keys to the project metadata (project-wide public SSH keys). This makes it easy to ssh to every VM instance on Compute Engine, but it's not a good idea: it is fine for a test, but it should not be used in production. We should add SSH keys via OS Login instead.
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#risks



Question:
How can we block SSH keys from the metadata (project-wide public SSH keys) on a VM instance?
Answer: We can block them by checking "Block project-wide SSH keys" on each instance.
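
For those who prefer the command line, the same setting can be applied with gcloud (a sketch; "centos7" stands in for your instance name):

gcloud compute instances add-metadata centos7 \
    --metadata block-project-ssh-keys=TRUE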

If we have an SSH key in the project metadata, we can ssh in with the matching private key and log in, like this:

So, we block it... On "Compute Engine" - "VM Instances", click the [instance name] and "Edit". Check "Block project-wide SSH keys" and "Save".


The VM instance should now refuse any key that is not in the instance's own SSH Keys. (You can remove the project owner's SSH key from the instance, but it will be automatically re-added when you click "SSH" in the GUI.)

Additionally, we should review the SSH keys in the metadata (project-wide public SSH keys) and remove the ones we are sure are no longer used. (Don't remove the ssh key of the project owner.)

After removing them, suppose we want to add an SSH key without adding it via OS Login. We can add it to the SSH Keys of the instance, like this:


Assume: username is "myuser".

First of all, we have to generate a private/public key pair. This example uses "PuTTY Key Generator", because I connect with PuTTY.


Then "Save private key" (We have to use when putty to server) and "Save public key".

To use the public key on the VM instance, click "Add item".



Example: the entry format is [public key] [username],

then "Save".
Note: in the picture, the [username] part is highlighted.
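
For illustration, such an entry could look like this (the key material here is hypothetical and truncated):

ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...truncated... myuser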

Open "putty", select "Private key file for authentication", fill in ip address and connect.

It's easy, right?
myuser@centos7:~$ id
uid=1003(myuser) gid=1004(myuser) groups=1004(myuser),4(adm),30(dip),44(video),46(plugdev),1000(google-sudoers)

If we use the command "id [user in project-wide SSH keys]", we still see the user, but it is unable to ssh to this VM instance.
myuser@centos7:~$ id opun
uid=1001(opun) gid=1002(opun) groups=1002(opun),4(adm),30(dip),44(video),46(plugdev),
Categories: DBA Blogs

Relocate Goldengate Processes to Other Node with agctl

Pakistan's First Oracle Blog - Fri, 2018-04-20 22:00
Oracle Grid Infrastructure Agents can be used to manage Oracle GoldenGate through Oracle GI. agctl is the utility to add, modify, and relocate GoldenGate resources. These Oracle GI agents can also be used with other products such as WebLogic, MySQL, etc.


Frits has a good article about installation and general commands regarding GI agents for a non-clustered environment.

Following is the command to relocate Goldengate processes to other node with agctl. 


[gi@hostname ~]$ agctl status goldengate [service_name]
[gi@hostname ~]$ agctl config goldengate [service_name]
[gi@hostname ~]$ agctl relocate goldengate [service_name] --node [node_name]
[gi@hostname ~]$ agctl config goldengate [service_name]
[gi@hostname ~]$ agctl status goldengate [service_name]

Hope that helps.
Categories: DBA Blogs

utl_http.begin_request results in protocol error when url size is big

Tom Kyte - Fri, 2018-04-20 17:26
Hi, while using the utl_http package, we are able to make calls to a 3rd party webservice and all was going well until we hit a transaction which resulted in a big URL size - for example, one transaction had multiple rejections and the URL size was bigger than normal ...
Categories: DBA Blogs

Upgrade to 12c - High Fetch time vs. Low execution time

Tom Kyte - Fri, 2018-04-20 17:26
Hi Tom, We are migrating our databases from Oracle 11.2.0.3 to Oracle 12.1.0.2.0R1 on Exadata and after we did this, we are seeing extreme slowness in loading 3 of our application screens, even though the queries are running as or more efficiently...
Categories: DBA Blogs

BI Publisher Desktop 12.2 Certified with E-Business Suite

Steven Chan - Fri, 2018-04-20 11:14

[Contributing Author: Pieter Breugelmans]

You can use Oracle Business Intelligence Publisher (BI Publisher) to create and manage reports for E-Business Suite data. Oracle BI Publisher Desktop 12.2.1.3.0 is certified with Oracle E-Business Suite Release 12.2 and 12.1 for the following Microsoft Office releases:

  • Microsoft Office 2016
  • Microsoft Office 2013
  • Microsoft Office 2010

What does this certification cover?

Oracle BI Publisher Desktop consists of client-side tools to assist in the design and testing of layout templates for Oracle E-Business Suite Release 12.2 and 12.1. These layout templates are executed by Oracle BI Publisher on the Oracle E-Business Suite application tier as illustrated in the diagram below. The desktop utility consists of the following tools:

  • Template Builder Add-in for Microsoft Word - to build RTF layout templates
  • Template Builder Add-in for Microsoft Excel - to build Excel layout templates
  • Template Viewer - to test and debug all supported Oracle BI Publisher layout template types

For details, see:


Categories: APPS Blogs

Partner Webcast – Oracle Container Native Application Development Platform – Use with Kubernetes

Containerization of cloud applications is rapidly becoming the right (and only) way to deploy complex systems architected with a microservice approach in mind. Containers solve one of the fundamental issues of...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partitioning vs Indexing

Tom Kyte - Thu, 2018-04-19 23:06
Hi Tom, I have a question on partitioning a table by list. I have a set of tables which need to be historicized once a new record is inserted: I have a STATUS column which flags an active status (AT) and a historic one (ST). To match this re...
Categories: DBA Blogs

Oracle v$open_cursor counts simple "updates" as open with the use of a cursor (open, execute, fetch, close, commit)

Tom Kyte - Thu, 2018-04-19 23:06
I am checking for open cursors while running our client server application (application info below) with the query below and noticed that a simple "update" without the use of any cursors shows as an open cursor. When another "update" is issued its repla...
Categories: DBA Blogs

Performance issue/session getting hang

Tom Kyte - Thu, 2018-04-19 23:06
Hi Tom, I have a table having around 5 million records. Table structure: DESC RPT_MSG_CHANGE

Name       Null     Type
---------- -------- --------------
OID        NOT NULL NUMBER
PRODUCT    NOT NULL VARCHAR2(20)
...
Categories: DBA Blogs

Quarterly EBS Upgrade Recommendations: April 2018 Edition

Steven Chan - Thu, 2018-04-19 11:50

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for E-Business Suite updates and technology stack components.  These quarterly recommendations are based upon the latest updates to Oracle's product strategies, latest support timelines, and newly-certified releases.

You can research these yourself using this Note:

Upgrade Recommendations for April 2018

Check your EBS support status and patching baseline

  • EBS 12.2: Apply the minimum 12.2 patching baseline (EBS 12.2.3 + latest technology stack updates listed below). In Premier Support to September 30, 2023.
  • EBS 12.1: Apply the minimum 12.1 patching baseline (12.1.3 Family Packs for products in use + latest technology stack updates listed below). In Premier Support to December 31, 2021.
  • EBS 12.0: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.
  • EBS 11.5.10: In Sustaining Support; no new patches available. Upgrade to 12.1.3 or 12.2. Before upgrading, 11i users should be on the minimum 11i patching baseline.

Apply the latest EBS suite-wide RPC or RUP

  • EBS 12.2: 12.2.7 (Sept. 2017)
  • EBS 12.1: 12.1.3 RPC5 (Aug. 2016)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Use the latest Rapid Install

  • EBS 12.2: StartCD 51 (Feb. 2016)
  • EBS 12.1: StartCD 13 (Aug. 2011)
  • EBS 12.0: 12.0.6
  • EBS 11.5.10: 11.5.10.2

Apply the latest EBS technology stack, tools, and libraries

  • EBS 12.2: AD/TXK Delta 10 (Sept. 2017); FND (Apr. 2017); EBS 12.2.6 OAF Update 11 (Apr. 2018); EBS 12.2.5 OAF Update 19 (Jan. 2018); EBS 12.2.4 OAF Update 18 (Dec. 2017); ETCC (Jan. 2018); Web Tier Utilities 11.1.1.9; Daylight Savings Time DSTv28 (Nov. 2016); upgrade to JDK 7
  • EBS 12.1: Web ADI Bundle 5 (Jan. 2018); Report Manager Bundle 5 (Jan. 2018); FND (Apr. 2017); OAF Bundle 5 (Jun. 2016); JTT Update 4 (Oct. 2016); Daylight Savings Time DSTv28 (Nov. 2016); upgrade to JDK 7

Apply the latest security updates

  • EBS 12.2 and 12.1: Apr. 2018 Critical Patch Update; SHA-2 PKI Certificates; SHA-2 Update for Web ADI & Report Manager; migrate from SSL or TLS 1.0 to TLS 1.2; sign JAR files
  • EBS 12.0: Oct. 2015 Critical Patch Update
  • EBS 11.5.10: April 2016 Critical Patch Update

Use the latest certified desktop components (EBS 12.2 and 12.1)

  • Use the latest JRE 1.8, 1.7, or 1.6 release that meets your requirements
  • Switch to Java Web Start
  • Upgrade to IE 11
  • Upgrade to Firefox ESR 52
  • Upgrade Office 2003 and Office 2007 to later Office versions (e.g. Office 2016)
  • Upgrade Windows XP, Vista, and Win 10v1507 to later versions (e.g. Windows 10v1607)

Upgrade to the latest database

  • All releases (EBS 12.2, 12.1, 12.0, 11.5.10): Database 11.2.0.4 or 12.1.0.2

If you're using Oracle Identity Management

  • EBS 12.2: Upgrade to Oracle Access Manager 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9
  • EBS 12.1: Migrate from Oracle SSO to OAM 11.1.2.3; upgrade to Oracle Internet Directory 11.1.1.9

If you're using Oracle Discoverer

  • EBS 12.2 and 12.1: Migrate to Oracle Business Intelligence Enterprise Edition (OBIEE) or Oracle Business Intelligence Applications (OBIA). Discoverer 11.1.1.7 is in Sustaining Support as of June 2017.

If you're using Oracle Portal

  • EBS 12.2: Migrate to Oracle WebCenter 11.1.1.9
  • EBS 12.1: Migrate to Oracle WebCenter 11.1.1.9 or upgrade to Portal 11.1.1.6 (End of Life Jun. 2017)

Categories: APPS Blogs

15 Minutes to get a Kafka Cluster running on Kubernetes – and start producing and consuming from a Node application

Amis Blog - Thu, 2018-04-19 11:07

For a workshop I will present on microservices and communication patterns, I need attendees to have their own local Kafka Cluster. I have found a way to have them up and running in virtually no time at all. Thanks to the combination of:

  • Kubernetes
  • Minikube
  • The Yolean/kubernetes-kafka GitHub Repo with Kubernetes yaml files that creates all we need (including Kafka Manager)

Prerequisites:

  • Minikube and Kubectl are installed
  • The Minikube cluster is running (minikube start)

In my case the versions are:

Minikube: v0.22.3, Kubectl Client 1.9 and (Kubernetes) Server 1.7:


The steps I went through:

Git Clone the GitHub Repository: https://github.com/Yolean/kubernetes-kafka 

From the root directory of the cloned repository, run the following kubectl commands:

(note: I did not know until today that kubectl apply -f can be used with a directory reference and will then apply all yaml files in that directory. That is incredibly useful!)

kubectl apply -f ./configure/minikube-storageclass-broker.yml
kubectl apply -f ./configure/minikube-storageclass-zookeeper.yml

(note: I had to comment out the reclaimPolicy attribute in both files – probably because I am running a fairly old version of Kubernetes)

kubectl apply -f ./zookeeper

kubectl apply -f ./kafka

(note: I had to change API version in 50pzoo and 51zoo as well as in 50kafka.yaml from apiVersion: apps/v1beta2 to apiVersion: apps/v1beta1 – see https://github.com/kubernetes/kubernetes/issues/55894 for details; again, I should upgrade my Kubernetes version)

To make Kafka accessible from the minikube host (outside the K8S cluster itself)

kubectl apply -f ./outside-services

This exposes Services as type NodePort instead of ClusterIP, making them available for client applications that can access the Kubernetes host.

I also installed (Yahoo) Kafka Manager:

kubectl apply -f ./yahoo-kafka-manager

(I had to change API version in kafka-manager from apiVersion: apps/v1beta2 to apiVersion: apps/v1beta1 )

At this point, the Kafka Cluster is running. I can check the pods and services in the Kubernetes Dashboard as well as through kubectl on the command line. I can get the Port at which I can access the Kafka Brokers:

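A quick way to look up that NodePort from the command line (a sketch, assuming the kafka namespace used by the Yolean manifests):

kubectl get svc -n kafka
# the outside-* services are of type NodePort; note the mapped port, e.g. 32400
minikube ip
# the host IP to combine with that port, e.g. 192.168.99.100:32400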

And I can access the Kafka Manager at the indicated Port.


Initially, no cluster is visible in Kafka Manager. By providing the Zookeeper information highlighted in the figure (zookeeper.kafka:2181) I can make the cluster visible in this user interface tool.

Finally, the proof of the pudding: programmatic production and consumption of messages to and from the cluster. Using the world’s simplest Node Kafka clients, it is easy to see the stuff is working. I am impressed.

I have created the Node application and its package.json file. Then added the kafka-node dependency (npm install kafka-node --save). Next I created the producer:

// before running, either globally install kafka-node  (npm install kafka-node)
// or add kafka-node to the dependencies of the local application

var kafka = require('kafka-node')
var Producer = kafka.Producer
var KeyedMessage = kafka.KeyedMessage;

// the producer is created asynchronously in initializeKafkaProducer() below
var producer;

var APP_VERSION = "0.8.5"
var APP_NAME = "KafkaProducer"

var topicName = "a516817-kentekens";
var KAFKA_BROKER_IP = '192.168.99.100:32400';

// from the Oracle Event Hub - Platform Cluster Connect Descriptor
var kafkaConnectDescriptor = KAFKA_BROKER_IP;

console.log("Running Module " + APP_NAME + " version " + APP_VERSION);

function initializeKafkaProducer(attempt) {
  try {
    console.log(`Try to initialize Kafka Client at ${kafkaConnectDescriptor} and Producer, attempt ${attempt}`);
    const client = new kafka.KafkaClient({ kafkaHost: kafkaConnectDescriptor });
    console.log("created client");
    producer = new Producer(client);
    console.log("submitted async producer creation request");
    producer.on('ready', function () {
      console.log("Producer is ready in " + APP_NAME);
    });
    producer.on('error', function (err) {
      console.log("failed to create the client or the producer " + JSON.stringify(err));
    })
  }
  catch (e) {
    console.log("Exception in initializeKafkaProducer" + JSON.stringify(e));
    console.log("Try again in 5 seconds");
    setTimeout(initializeKafkaProducer, 5000, ++attempt);
  }
}//initializeKafkaProducer
initializeKafkaProducer(1);

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event) {
  var km = new KeyedMessage(eventKey, JSON.stringify(event));
  var payloads = [
    { topic: topicName, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + topicName + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + topicName + " :" + JSON.stringify(data));
  });

}

//example calls: (after waiting for three seconds to give the producer time to initialize)
setTimeout(function () {
  eventPublisher.publishEvent("mykey", { "kenteken": "56-TAG-2", "country": "nl" })
}
  , 3000)

and ran the producer:


Then I created the consumer:

var kafka = require('kafka-node');
// the async package is used in the SIGINT handler below (npm install async --save)
var async = require('async');

var APP_VERSION = "0.8.5"
var APP_NAME = "KafkaConsumer"

var eventListenerAPI = module.exports;

// from the Oracle Event Hub - Platform Cluster Connect Descriptor

var topicName = "a516817-kentekens";

console.log("Running Module " + APP_NAME + " version " + APP_VERSION);
console.log("Event Hub Topic " + topicName);

var KAFKA_BROKER_IP = '192.168.99.100:32400';

var consumerOptions = {
    kafkaHost: KAFKA_BROKER_IP,
    groupId: 'local-consume-events-from-event-hub-for-kenteken-applicatie',
    sessionTimeout: 15000,
    protocol: ['roundrobin'],
    fromOffset: 'earliest' // equivalent of auto.offset.reset valid values are 'none', 'latest', 'earliest'
};

var topics = [topicName];
var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumerLocal' }, consumerOptions), topics);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);

consumerGroup.on('connect', function () {
    console.log('connected to ' + topicName + " at " + consumerOptions.kafkaHost);
})

function onMessage(message) {
    console.log('%s read msg Topic="%s" Partition=%s Offset=%d'
    , this.client.clientId, message.topic, message.partition, message.offset);
}

function onError(error) {
    console.error(error);
    console.error(error.stack);
}

process.once('SIGINT', function () {
    async.each([consumerGroup], function (consumer, callback) {
        consumer.close(true, callback);
    });
});

and ran the consumer – which duly consumed the event published by the publisher. It is wonderful.


Resources

The main resource is the GitHub Repo: https://github.com/Yolean/kubernetes-kafka . Absolutely great stuff.

Also useful: npm package kafka-node – https://www.npmjs.com/package/kafka-node

Documentation on Kubernetes: https://kubernetes.io/docs/user-journeys/users/application-developer/foundational/#section-2 – with references to Kubectl and Minikube – and the Katakoda playground: https://www.katacoda.com/courses/kubernetes/playground

The post 15 Minutes to get a Kafka Cluster running on Kubernetes – and start producing and consuming from a Node application appeared first on AMIS Oracle and Java Blog.

JavaOne Event Expands with More Tracks, Languages and Communities – and New Name

OTN TechBlog - Thu, 2018-04-19 11:00

The JavaOne conference is expanding to create a new, bigger event that’s inclusive to more languages, technologies and developer communities. Expect more talks on Go, Rust, Python, JavaScript, and R along with more of the great Java technical content that developers have come to expect. We’re calling the new event Oracle Code One, October 22-25 at Moscone West in San Francisco.

Oracle Code One will include a Java technical keynote with the latest information on the Java platform from the architects of the Java team.  It will also have the latest details on Java 11, advances in OpenJDK, and other core Java development.  We are planning dedicated tracks for server-side Java EE technology including Jakarta EE (now part of the Eclipse Foundation), Spring, and the latest advances in Java microservices and containers.  There will also be a wealth of community content on client development, JVM languages, IDEs, test frameworks, etc.

As we expand, developers can also expect additional leading-edge topics such as chatbots, microservices, AI, and blockchain. There will also be sessions around our modern open source developer technologies including Oracle JET, Project Fn and OpenJFX.

Finally, one of the things that will continue to make this conference so great is the breadth of community-run activities such as Oracle Code4Kids workshops for young developers, IGNITE lightning talks run by local JUG leaders, and an array of technology demos and community projects showcased in the Developer Lounge.  Expect a grand finale with the Developer Community Keynote to close out this week of fun, technology, and community.

Today, we are launching the call for papers for Oracle Code One and you can apply now to be part of any of the 11 tracks of content for Java developers, database developers, full stack developers, DevOps practitioners, and community members.  

I hope you are as excited about this expansion of JavaOne as I am and will join me at the inaugural year of Oracle Code One!

Please submit your abstracts here for consideration:
https://www.oracle.com/code-one/index.html

Long Raw to BLOB

Tom Kyte - Thu, 2018-04-19 04:46
Hi Tom, We are using an Oracle 8.1.7 database. Is there a way in PL/SQL or Java Stored Procedure to convert a Long Raw into a BLOB? Thanks, Firas Khasawneh
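
A common approach, sketched here with hypothetical table and column names, is the TO_LOB operator, which can convert a LONG RAW column to a BLOB during an INSERT ... SELECT:

-- docs_old(id number, payload long raw) is a hypothetical source table
create table docs_new (id number, payload blob);

insert into docs_new (id, payload)
select id, to_lob(payload) from docs_old;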
Categories: DBA Blogs

Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode

Amis Blog - Thu, 2018-04-19 02:23

In previous articles, I have talked about using Docker Containers in smart testing strategies by creating a container image that contains the baseline of the application and the required test setup (test data, for example). For each test, instead of doing complex setup actions and finishing off with elaborate tear-down steps, you simply spin up a container at the beginning and toss it away at the end.

I have shown how that can be done through the command line – but that of course is not a workable procedure. In this article I will provide a brief introduction of programmatic manipulation of containers. By providing access to the Docker Daemon API from remote clients (step 1) and by leveraging the npm package Dockerode (step 2) it becomes quite simple from a straightforward Node application to create, start and stop containers – as well as build, configure, inspect, pause them and manipulate in other ways. This opens up the way for build jobs to programmatically run tests by starting the container, running the tests against that container and killing and removing the container after the test. Combinations of containers that work together can be managed just as easily.

As I said, this article is just a very lightweight introduction.

Expose Docker Daemon API to remote HTTP clients

The step that took me longest was exposing the Docker Daemon API. Subsequent versions of Docker used different configurations for this, and apparently different Linux distributions also have different approaches. I was happy to find this article: https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04 which describes how to enable access to the API with Ubuntu 16.x as Docker Host.

Edit file /lib/systemd/system/docker.service – add -H tcp://0.0.0.0:4243 to the entry that describes how to start the Docker Daemon in order to have it listen to incoming requests at port 4243 (note: other ports can be used just as well).
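
For illustration, the modified entry could look like the following (a sketch; the original ExecStart line differs per Docker version, so adapt rather than copy):

# fragment of /lib/systemd/system/docker.service
[Service]
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:4243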

Reload the systemd configuration (systemctl daemon-reload) to apply the changed file

Restart the Docker Service: service docker restart

And we are in business.

A simple check to see if HTTP requests on port 4243 are indeed received and handled: execute this command on the Docker host itself:

curl http://localhost:4243/version


The next step is the actual remote access. From a browser running on a machine that can ping successfully to the Docker Host – in my case that is the Virtual Box VM spun up by Vagrant, at IP 192.168.188.108 as defined in the Vagrantfile – open this URL: http://192.168.188.108:4243/version. The result should be similar to this:


Get going with Dockerode

To get started with npm package Dockerode is not any different really from any other npm package. So the steps to create a simple Node application that can list, start, inspect and stop containers in the remote Docker host are as simple as:

Use npm init to create the skeleton for a new Node application

Use

npm install dockerode --save

to retrieve Dockerode and create the dependency in package.json.

Create file index.js. Define the Docker Host IP address (192.168.188.108 in my case) and the Docker Daemon Port (4243 in my case) and write the code to interact with the Docker Host. This code will list all containers. Then it will inspect, start and stop a specific container (with identifier starting with db8). This container happens to run an Oracle Database – although that is not relevant in the scope of this article.

var Docker = require('dockerode');
var dockerHostIP = "192.168.188.108"
var dockerHostPort = 4243

var docker = new Docker({ host: dockerHostIP, port: dockerHostPort });

docker.listContainers({ all: true }, function (err, containers) {
    console.log('Total number of containers: ' + containers.length);
    containers.forEach(function (container) {
        console.log(`Container ${container.Names} - current status ${container.Status} - based on image ${container.Image}`)
    })
});

// getContainer creates a local container entity; it does not query the API yet
async function startStop(containerId) {
    var container = docker.getContainer(containerId)
    try {
        var data = await container.inspect()
        console.log("Inspected container " + JSON.stringify(data))
        var started = await container.start();
        console.log("Started "+started)
        var stopped = await container.stop();
        console.log("Stopped "+stopped)
    } catch (err) {
        console.log(err);
    };
}
//invoke function
startStop('db8')

The output in Visual Studio Code looks like this:


And the action can be tracked on the Docker host like this (to prove it is real…)

Resources

Article by Ivan Krizsan on configuring the Docker Daemon on Ubuntu 16.x – my life saver: https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04

GitHub Repo for Dockerode – with examples and more: https://github.com/apocas/dockerode

Presentation at DockerCon 2016 that gave me the inspiration to use Dockerode: https://www.youtube.com/watch?v=1lCiWaLHwxo 

Docker docs on Configuring the Daemon – https://docs.docker.com/install/linux/linux-postinstall/#configure-where-the-docker-daemon-listens-for-connections


The post Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode appeared first on AMIS Oracle and Java Blog.

Garbage First in JDeveloper

Darwin IT - Thu, 2018-04-19 01:07
At my current customer we work with VDIs (Virtual Desktop Images) that several times a day become very, very slow. Sometimes so slow that everything more or less stalls for a minute or two.

JDeveloper is not known as a Ferrari among IDEs. One of the causes is that the default heap settings are very poor: 128M-800M. Especially when you use it with the SOA or BPM Quickstart, the heap will need to grow several times at startup. And very soon after you start working in it, you'll get out-of-memory errors.

Because of the VDIs I made several changes to try to improve performance.
The main thing is to set Xms and Xmx both to 2048M; to this day I haven't found myself needing more.

But I found that using the Garbage First collector gives me slightly better performance.

To set it, together with the heap, add/change the following options in the ide.conf in ${JDEV_HOME}\jdeveloper\ide\bin\:
# Set the default memory options for the Java VM which apply to both 32 and 64-bit VM's.
# These values can be overridden in the user .conf file, see the comment at the top of this file.
#AddVMOption -Xms128M
#AddVMOption -Xmx800M
AddVMOption -Xms2048M
AddVMOption -Xmx2048M
AddVMOption -XX:+UseG1GC
AddVMOption -XX:MaxGCPauseMillis=200

Find more on the command line options in this G1GC tutorial.

You can also use the ParNew collector in combination with the ParOld or ConcMarkSweep collector, as suggested in this blog. But from Java 9 onwards G1GC is the default, and I expect it to fit the behavior of JDeveloper better, as it does for SOA Suite and OSB installations.
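
For reference, the equivalent ide.conf options for that older collector combination would be something like this (a sketch; these flags apply to Java 8 and earlier, as CMS/ParNew are deprecated from Java 9 onwards):

# alternative: CMS with parallel young-generation collection
AddVMOption -XX:+UseConcMarkSweepGC
AddVMOption -XX:+UseParNewGC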

JRE 1.8.0_171/172 Certified with Oracle EBS 12.1 and 12.2

Steven Chan - Wed, 2018-04-18 12:11


Java Runtime Environment 1.8.0_171 (a.k.a. JRE 8u171-b11) and JRE 1.8.0_172 (a.k.a. JRE 8u172-b11) and later updates on the JRE 8 codeline are now certified with Oracle E-Business Suite 12.1 and 12.2 for Windows clients.

Java Web Start is available

This JRE release may be run with either the Java plug-in or Java Web Start.

Java Web Start is certified with EBS 12.1 and 12.2 for Windows clients.  

Considerations if you're also running JRE 1.6 or 1.7

JRE 1.7 and JRE 1.6 updates included an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available for those Java releases. It is expected that Java deployment technology will not be packaged in later Java 6 or 7 updates.

JRE 1.7.0_161 (and later 1.7 updates) and 1.6.0_171 (and later 1.6 updates) can still run Java content.  They cannot launch Java content.

End-users who only have JRE 1.7 or JRE 1.6 -- and not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.7 or 1.6 for compatibility with other third-party Java applications must also install the JRE 1.8.0_152 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_152 or later JRE 1.8 updates are installed on a Windows desktop, it can be used to launch JRE 1.7 and JRE 1.6. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

All patches required for ensuring full compatibility of the E-Business Suite with JRE 8 are documented in these Notes:

For EBS 12.1 & 12.2

Implications of Java 6 and 7 End of Public Updates for EBS Users

The Oracle Java SE Support Roadmap and Oracle Lifetime Support Policy for Oracle Fusion Middleware documents explain the dates and policies governing Oracle's Java Support.  The client-side Java technology (Java Runtime Environment / JRE) is now referred to as Java SE Deployment Technology in these documents.

Starting with Java 7, Extended Support is not available for Java SE Deployment Technology.  It is more important than ever for you to stay current with new JRE versions.

If you are currently running JRE 6 on your EBS desktops:

  • You can continue to do so until the end of Java SE 6 Deployment Technology Extended Support in June 2017
  • You can obtain JRE 6 updates from My Oracle Support.  See:

If you are currently running JRE 7 on your EBS desktops:

  • You can continue to do so until the end of Java SE 7 Deployment Technology Premier Support in July 2016
  • You can obtain JRE 7 updates from My Oracle Support.  See:

If you are currently running JRE 8 on your EBS desktops:

Will EBS users be forced to upgrade to JRE 8 for Windows desktop clients?

No.

This upgrade is highly recommended but remains optional while Java 6 and 7 are covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JRE 6 and 7 desktop clients. Note that there are different impacts of enabling JRE Auto-Update depending on your current JRE release installed, despite the availability of ongoing support for JRE 6 and 7 for EBS customers; see the next section below.

Impact of enabling JRE Auto-Update

Java Auto-Update is a feature that keeps desktops up-to-date with the latest Java release.  The Java Auto-Update feature connects to java.com at a scheduled time and checks to see if there is an update available.

Enabling the JRE Auto-Update feature on desktops with JRE 6 installed will have no effect.

With the release of the January 2015 Critical Patch Update, the Java Auto-Update mechanism will automatically update JRE 7 plug-ins to JRE 8.

Enabling the JRE Auto-Update feature on desktops with JRE 8 installed will apply JRE 8 updates.

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

What do Mac users need?

JRE 8 is certified for Mac OS X 10.8 (Mountain Lion), 10.9 (Mavericks), 10.10 (Yosemite), and 10.11 (El Capitan) desktops.  For details, see:

Will EBS users be forced to upgrade to JDK 8 for EBS application tier servers?

No.

JRE is used for desktop clients.  JDK is used for application tier servers.

JRE 8 desktop clients can connect to EBS environments running JDK 6 or 7.

JDK 8 is not certified with the E-Business Suite.  EBS customers should continue to run EBS servers on JDK 6 or 7.

Known Issues

Internet Explorer Performance Issue

Launching JRE 1.8.0_73 through Internet Explorer results in a delay of around 20 seconds before the applet starts to load (the Java Console will come up if enabled).

This issue is fixed in JRE 1.8.0_74.  Internet Explorer users are recommended to uptake this version of JRE 8.

Form Focus Issue

Clicking outside the frame during forms launch may cause a loss of focus when running with JRE 8; this can occur in all Oracle E-Business Suite releases. To fix this issue, apply the following patch:

Categories: APPS Blogs

JRE 1.7.0_181 Certified with Oracle E-Business Suite 12.1 and 12.2

Steven Chan - Wed, 2018-04-18 12:10


Java Runtime Environment 1.7.0_181 (a.k.a. JRE 7u181-b09) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

What's new in this update?

This update includes an important change: the Java deployment technology (i.e. the JRE browser plugin) is no longer available as of this Java release. It is expected that Java deployment technology will not be packaged in later Java 7 updates.

JRE 1.7.0_161 and later JRE 1.7 updates can still run Java content.  These releases cannot launch Java content.

End-users who only have JRE 1.7.0_161 and later JRE 1.7 updates -- but not JRE 1.8 -- installed on their Windows desktop will be unable to launch Java content.

End-users who need to launch JRE 1.7 for compatibility with other third-party Java applications must also install the October 2017 CPU release JRE 1.8.0_151 or later JRE 1.8 updates on their desktops.

Once JRE 1.8.0_151 or a later JRE 1.8 update is installed on a Windows desktop, it can be used to launch JRE 1.7.0_161 and later updates on the JRE 1.7 codeline. 

How do I get help with this change?

EBS customers requiring assistance with this change to Java deployment technology can log a Service Request for assistance from the Java Support group.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

How can EBS customers obtain Java 7?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

Java Auto-Update Mechanism

With the release of the January 2015 Critical patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.


What do Mac users need?

Mac users running Mac OS X 10.7 (Lion), 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE ("Deployment Technology") is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 (excluding Deployment Technology) is covered by Extended Support until December 2018.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 (excluding Deployment Technology) by December 2018. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

Known Issues

When using Internet Explorer, JRE 1.7.0_01 had a delay of around 20 seconds before the applet started to load. This issue is fixed in JRE 1.7.0_95.

Categories: APPS Blogs
