Feed aggregator

ModPlSql Authentication in Chrome and Firefox

Tom Kyte - Thu, 2016-10-06 22:26
Our application, using mod_plsql authentication, retains the password when accessed from Chrome or Firefox. This is when the user is asked to save the password and clicks "YES". The application is using DAD/logmeoff for de-authenticating the user. Th...
Categories: DBA Blogs

System tablespace migration from dictionary to locally managed in 12c.

Tom Kyte - Thu, 2016-10-06 22:26
Tom, I am an Oracle DBA who has inherited a large production landscape which has been in continuous operation since 1999 and Oracle 8. Back in the days of 10g, the previous DBA performed a migration of all the tablespaces from dictionary to loc...
Categories: DBA Blogs

Cumulative Sum

Tom Kyte - Thu, 2016-10-06 22:26
Hi, Need some expertise here. I'm trying to fetch a cumulative sum from a transaction table downwards, starting from an outstanding balance that I have handy. Below is the output (trying to get a running total based on the calcn formula) that I'm trying t...
Categories: DBA Blogs

Oracle XMLUPDATE and XMLQUERY namespace usage difference

Tom Kyte - Thu, 2016-10-06 22:26
Hi, I am trying to change the updatexml approach to the xquery way of updating, and found that the namespace that was used in updatexml cannot be used the same way in the xquery. Kindly let me know whether the usage mentioned below is valid or not. Test code ...
Categories: DBA Blogs

Remove Duplicate values from table

Tom Kyte - Thu, 2016-10-06 22:26
Hi Team, I have a table (user_details) which has duplicate values; I need to remove the old values from the table. Please suggest. Here is a sample of my user_details table: leg_id update_time login_id 1-100CAFS 20140911153...
Categories: DBA Blogs

Fun with PL/SQL code reviews – Part 1

Yann Neuhaus - Thu, 2016-10-06 10:32

For quite a long time I had not worked with PL/SQL, so I was quite happy when I had the chance to review some code at a customer. The status today: I am not that happy anymore :( Let me explain why and show you some examples of what was discovered. Of course all of the identifiers have been obfuscated and this is not meant to blame anyone. It is more to make people aware of what you should not do and that you have to be as exact as possible when you do programming. Even more important: code that you write must be maintainable and understandable by others.

We start with a simple “if-then-else-end if” block:

         IF c <= d
         THEN
            IF c = d
            THEN
               l2 := l2 || l3;
            ELSE
               l2 := l2 || l3 || ',';
            END IF;
         END IF;

So what is wrong with this? The code does what it is supposed to do, true. But it is more complicated than it needs to be. Let’s look at the first line:

         IF c <= d

If we enter this “IF” then we know that c is either less than d or equal to d, correct? On line 3 (the inner “IF”) we know that c equals d if we enter it. What does this imply for line 6 (the “ELSE”)?
It implies that c is less than d when we enter the “ELSE”. So we could also write it like this:

            IF c = d
            THEN
               l2 := l2 || l3;
            ELSIF c < d THEN
               l2 := l2 || l3 || ',';
            END IF;

This is much clearer and it is less code. We are only interested in whether c is equal to d or less than d, that’s it. So we should design the code exactly for that use case.

The next example is about the usage of a record. This is the code block:

         ...

         ll_col1 SCHEMA.TABLE.COLUMN1%TYPE;
         ll_col2 SCHEMA.TABLE.COLUMN2%TYPE;
         ll_col3 SCHEMA.TABLE.COLUMN3%TYPE;
         ll_col4 SCHEMA.TABLE.COLUMN4%TYPE;
         ...
         DECLARE
            TYPE MyRecord IS RECORD
            (
               l_col1    SCHEMA.TABLE.COLUMN1%TYPE,
               l_col2    SCHEMA.TABLE.COLUMN2%TYPE,
               l_col3    SCHEMA.TABLE.COLUMN3%TYPE,
               l_col4    SCHEMA.TABLE.COLUMN4%TYPE
            );
            rec1   MyRecord;
         BEGIN
            v_sql := 'SELECT col1, col2, col3, col4 FROM SCHEMA.TABLE WHERE col3 = '''
               || ll_col3
               || '''';
 
            EXECUTE IMMEDIATE v_sql INTO rec1;

            ll_col1 := rec1.l_col1;
            ll_col2 := rec1.l_col2;
            ll_col3 := rec1.l_col3;
            ll_col4 := rec1.l_col4;
         EXCEPTION
            WHEN NO_DATA_FOUND
            THEN
               ll_col3 := 0;
         END;

Besides the fact that declaring a new block inside an existing block is really ugly, what is wrong here? A new record type is defined and then used to fetch the data from the dynamic SQL statement. Nothing wrong so far. But then the fields of this record are written back to four variables that are valid both inside and outside of the block. Why? Probably because the record is only valid inside the “declare-begin-end” block. But then why do I need the record at all? Really confusing :)
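
For comparison, here is a minimal sketch of how the block could look without the record, assuming the four ll_ variables are all that is needed outside the block (and using a bind variable instead of concatenating the value into the statement):

         BEGIN
            -- a plain SELECT ... INTO would do as well if the statement
            -- does not really have to be dynamic
            EXECUTE IMMEDIATE
               'SELECT col1, col2, col3, col4 FROM SCHEMA.TABLE WHERE col3 = :b1'
               INTO ll_col1, ll_col2, ll_col3, ll_col4
               USING ll_col3;
         EXCEPTION
            WHEN NO_DATA_FOUND
            THEN
               ll_col3 := 0;
         END;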

The next one is more about how easy it is to understand code written by someone else. This is the code block:

      SELECT COUNT (*)
        INTO l_count
        FROM USER_INDEXES
       WHERE INDEX_NAME = '' || l_index_name || '';

      IF l_count > 0
      THEN
         EXECUTE IMMEDIATE 'DROP INDEX ' || '' || l_index_name || '';
      END IF;

I really had to read this carefully until I understood why it was written this way. The developer probably had this intention: “I want to know if a specific index exists and if yes, then I want to drop it”.
But what he actually coded is: “I want to know how many indexes there are with this name”. As the query is against user_indexes the answer cannot be more than 1, correct? So the correct and much easier to understand way of writing this would be:

      SELECT COUNT (*)
        INTO l_count
        FROM USER_INDEXES
       WHERE INDEX_NAME = '' || l_index_name || '';

      IF l_count = 1
      THEN
         EXECUTE IMMEDIATE 'DROP INDEX ' || '' || l_index_name || '';
      END IF;

This is just a small change and does not affect the functionality at all, but it is much clearer. Even clearer would be something like this:

      BEGIN
        SELECT 'index_exists'
          INTO l_exists
          FROM USER_INDEXES
         WHERE INDEX_NAME = '' || l_index_name || '';
      EXCEPTION
        WHEN NO_DATA_FOUND THEN l_exists := 'index_does_not_exist';
      END;
      IF l_exists = 'index_exists'
      THEN
         EXECUTE IMMEDIATE 'DROP INDEX ' || '' || l_index_name || '';
      END IF;

Using plain language in the code we can make it much easier to understand. Besides that, the concatenation in the WHERE clause and in the EXECUTE IMMEDIATE statement is not required, so the final code could look like this:

      BEGIN
        SELECT 'index_exists'
          INTO l_exists
          FROM USER_INDEXES
         WHERE INDEX_NAME = l_index_name;
      EXCEPTION
        WHEN NO_DATA_FOUND THEN l_exists := 'index_does_not_exist';
      END;
      IF l_exists = 'index_exists'
      THEN
         EXECUTE IMMEDIATE 'DROP INDEX ' || l_index_name;
      END IF;

In the next post we’ll look at some other examples which can be improved by changing the way we think about what we code and how others may interpret it.

 

The article Fun with PL/SQL code reviews – Part 1 first appeared on Blog dbi services.

Oracle JET and ADF BC REST Security Integration Approach

Andrejus Baranovski - Thu, 2016-10-06 10:20
I promised an attendee of my OOW'16 session (Building Enterprise-Grade Mobile Apps with Oracle JET and Cordova [CON5731]) that I would post a blog about ADF BC REST security and its integration with Oracle JET. This post demonstrates how we can reuse the session cookie ID generated by the ADF BC REST web session for subsequent REST requests from JET.

First things first: you can download the source code here - jet_adfbc_security.zip. The archive contains the ADF BC REST application and the JET source code (you need to copy the latter into your local JET application).

Take a look first at the JET login form. This is where we collect the username/password and call the login function. One important detail is invalidComponentTracker, which allows us to report a required-field validation error when the user hits the login button with an empty username or password:


Here is the login function in JET. If there are no validation errors, it executes a POST against a custom method of the ADF BC REST service. In the response we could return user info, preferences, etc. This is the only request where we send the username/password. The key point of this request is to get a JSESSIONID from the ADF BC REST server, so we can use it for subsequent requests without sending the username/password again. This is a similar concept to ADF Faces, which also uses a JSESSIONID to track the web user and the HTTP session on the server. If login is successful, we read the custom response parameter carrying the JSESSIONID value. The JET router is updated to render a different menu structure after login:


The custom response parameter is populated on the server in a Filter class. On the authentication request this parameter is set once:


The ADF BC REST application is secured with standard ADF Security:


This is how it works. Login form in JET:


Login succeeds with the redsam/welcome1 user. Two tabs are rendered - Home and People. The Home tab displays a chart with employees:


Let's dive deeper and check what happens in the REST communication. If authentication based on the Authorization header succeeds, the POST response carries the custom parameter with the JSESSIONID value:


The chart data in the Home tab is retrieved through a GET request, and this request no longer uses the Authorization header. It calls the REST method with the JSESSIONID in the URL. The JSESSIONID must come before the URL parameters:


The Home tab is implemented with the JET chart component:


The JSESSIONID is included in the REST call URL through the getURL method, which is referenced by the JET collection:


The People tab implements a table with pagination support:


The same approach is applied in the People tab: the JSESSIONID is appended to the URL through the getURL method, before the URL parameters:


The People UI with a paginated table in JET:


The REST request URL contains the JSESSIONID:

My session workload

Jonathan Lewis - Thu, 2016-10-06 07:19

My old website (www.jlcomp.demon.co.uk) will be disappearing in a couple of weeks – but there are a couple of timeless articles on it that are worth saving. Although the popularity of this one has probably been surpassed by Tanel Poder’s Snapper script, or other offerings by Tom Kyte and Adrian Billington, it’s still one of those useful little things to have around – it’s a package that takes a snapshot of your session stats.

The package depends on a view created in the SYS schema, and the package itself has to be installed in the SYS schema – which is why other strategies for collecting the information have become more popular; but if you want to have it handy, here are the two scripts:

rem
rem     Script:         c_mystats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2001
rem     Purpose:        Put names to v$mystat
rem
rem     Last tested
rem             12.1.0.2        -- naming issue
rem             11.2.0.4
rem             10.2.0.5
rem             10.1.0.4
rem              9.2.0.8
rem              8.1.7.4
rem
rem     Notes:
rem     Should be run by SYS - which means it has to be re-run
rem     on a full database export/import
rem
rem     It looks as if it is illegal to create a view with a
rem     name starting with v$ in the sys account as from 12c.
rem     (ORA-00999: invalid view name). Hence the JV$ name.
rem
rem     But you can create a public synonym starting "v$"
rem

create or replace view jv$my_stats
as
select
        /*+
                first_rows
                ordered
        */
        ms.sid,
        sn.statistic#,
        sn.name,
        sn.class,
        ms.value
from
        v$mystat        ms,
        v$statname      sn
where
        sn.statistic# = ms.statistic#
;

drop public synonym v$my_stats;
create public synonym v$my_stats for jv$my_stats;
grant select on v$my_stats to public;

rem
rem	Script:		snap_myst.sql
rem	Author:		Jonathan Lewis
rem	Dated:		March 2001
rem	Purpose:	Package to get snapshot start and delta of v$mystat
rem
rem	Last tested
rem		12.1.0.2
rem		11.2.0.4
rem		10.2.0.5
rem		10.1.0.4
rem		 9.2.0.8
rem		 8.1.7.4
rem
rem	Notes
rem	Has to be run by SYS to create the package
rem	Depends on view (j)v$my_stats (see c_mystats.sql)
rem
rem	Usage:
rem		set serveroutput on size 1000000 format wrapped
rem		set linesize 120
rem		set trimspool on
rem		execute snap_my_stats.start_snap
rem		-- do something
rem		execute snap_my_stats.end_snap
rem

create or replace package snap_my_stats as
	procedure start_snap;
	procedure end_snap (i_limit in number default 0);
end;
/

create or replace package body snap_my_stats as

cursor c1 is
	select 
		statistic#, 
		name,
		value
	from 
		v$my_stats
	where
		value != 0
	;


	type w_type is table of c1%rowtype index by binary_integer;
	w_list		w_type;
	empty_list	w_type;

	m_start_time	date;
	m_start_flag	char(1);
	m_end_time	date;

procedure start_snap is
begin

	m_start_time := sysdate;
	m_start_flag := 'U';
	w_list := empty_list;

	for r in c1 loop
		w_list(r.statistic#).value := r.value;
	end loop;

end start_snap;


procedure end_snap (i_limit in number default 0) 
is
begin

	m_end_time := sysdate;

	dbms_output.put_line('---------------------------------');

	dbms_output.put_line('Session stats - ' ||
				to_char(m_end_time,'dd-Mon hh24:mi:ss')
	);

	if m_start_flag = 'U' then
		dbms_output.put_line('Interval:-  '  || 
				trunc(86400 * (m_end_time - m_start_time)) ||
				' seconds'
		);
	else
		dbms_output.put_line('Since Startup:- ' || 
				to_char(m_start_time,'dd-Mon hh24:mi:ss')
		);
	end if;

	if (i_limit != 0) then
		dbms_output.put_line('Lower limit:-  '  || i_limit);
	end if;

	dbms_output.put_line('---------------------------------');

	dbms_output.put_line(
		rpad('Name',60) ||
		lpad('Value',18)
	);

	dbms_output.put_line(
		rpad('----',60) ||
		lpad('-----',18)
	);

	for r in c1 loop
		if (not w_list.exists(r.statistic#)) then
		    w_list(r.statistic#).value := 0;
		end if;

		if (
		       (r.value > w_list(r.statistic#).value + i_limit)
		) then
			dbms_output.put(rpad(r.name,60));
			dbms_output.put(to_char(
				r.value - w_list(r.statistic#).value,
					'9,999,999,999,990')
			);
			dbms_output.new_line;
		end if;
	end loop;

end end_snap;

begin
	select
		logon_time, 'S'
	into
		m_start_time, m_start_flag
	from
		v$session
	where
		sid = 	(
				select /*+ no_unnest */ sid 
				from v$mystat 
				where rownum = 1
			);

end snap_my_stats;
/

drop public synonym snap_my_stats;
create public synonym snap_my_stats for snap_my_stats;
grant execute on snap_my_stats to public;

One point to be cautious about with this package: do not embed the snapshot calls (together with the workload you are measuring) inside a single anonymous PL/SQL block, e.g.

begin

        snap_my_stats.start_snap;

        -- some other code

        snap_my_stats.end_snap;

end;
/

There are some statistics in v$my_stats (v$mystat / v$sesstat) which are not updated until the end of a database call – and calling an anonymous pl/sql block counts as a single database call, so some of your statistics (for example “CPU used by this session”) will report misleading values.
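
For contrast, here is a minimal sketch of the intended usage from SQL*Plus, with each snapshot call and the workload issued as separate top-level calls (the query is only an illustrative workload, matching the usage notes in the script header):

set serveroutput on size 1000000 format wrapped

execute snap_my_stats.start_snap

select count(*) from all_objects;       -- the work you want to measure

execute snap_my_stats.end_snap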


Oracle EMPTY_BLOB Function with Examples

Complete IT Professional - Thu, 2016-10-06 06:00
In this article, I’ll explain what the Oracle EMPTY_BLOB function does and show you an example of how to use it. Purpose of the Oracle EMPTY_BLOB Function The EMPTY_BLOB function is used to initialise a LOB column to EMPTY. It can be used in several places: In an INSERT statement In an UPDATE statement Initialising […]
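
As a quick illustration of the INSERT and UPDATE cases mentioned above (using a hypothetical table docs with a BLOB column doc_body, not part of the original article):

CREATE TABLE docs (doc_id NUMBER PRIMARY KEY, doc_body BLOB);

-- initialise the LOB at insert time
INSERT INTO docs (doc_id, doc_body) VALUES (1, EMPTY_BLOB());

-- reset an existing LOB to empty
UPDATE docs SET doc_body = EMPTY_BLOB() WHERE doc_id = 1;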
Categories: Development

How can we add custom code to existing triggers?

Bar Solutions - Thu, 2016-10-06 05:06

Dear Patrick,

We have bought an application that runs on an Oracle database. There are triggers defined on the tables and we want to add our own code to these triggers, but we don’t have access to the source code. What approach should we follow to accomplish this?

Collin Bratforth

Dear Collin,

There are two types of DML triggers: statement level triggers and row level triggers. For each of these two types there are two triggering moments, before and after, so you get four trigger moments for each DML operation. Statement level triggers fire once for each statement; row level triggers fire once for each row. This gives you enough moments to add your code. If, however, you want to add code to a triggering moment already used by the application, you can just add another trigger which will fire at the same moment as the existing one.
This should give you plenty of possibilities to add your own code. Since Oracle 8i you can define multiple triggers on the same firing event, but if you do so there is no way to tell which trigger will fire first. You might think it is based on the creation order or the alphabetical order of the object names, but that is not the case. If the code executed in the triggers is not interdependent there is no problem, but if one trigger gets the next value of a sequence and the other trigger wants to use the current value, you might run into a problem: if the second trigger fires first, the current value of the sequence is either not yet defined or still holds an old value (previously defined in the session), and neither is correct.

CREATE TABLE trigger_demo
( ID NUMBER
, VALUE VARCHAR2(30)
, first_trigger_value VARCHAR2(30)
, second_trigger_value VARCHAR2(30)
)
/
CREATE SEQUENCE trigger_demo_seq START WITH 1 NOCACHE
/
CREATE OR REPLACE TRIGGER first_trigger
  BEFORE INSERT ON trigger_demo
  FOR EACH ROW
BEGIN
  :new.id                  := trigger_demo_seq.nextval;
  :new.first_trigger_value := 'First Trigger ' || to_char(trigger_demo_seq.currval);
END;
/
CREATE OR REPLACE TRIGGER second_trigger
  BEFORE INSERT ON trigger_demo
  FOR EACH ROW
BEGIN
  :new.second_trigger_value := 'Second Trigger ' || to_char(trigger_demo_seq.currval);
END;
/
INSERT INTO trigger_demo
  (VALUE)
VALUES
  ('Patrick')
/
SELECT *
  FROM trigger_demo
/

If you run this code, chances are about fifty-fifty that you will run into this error:

ORA-08002: sequence TRIGGER_DEMO_SEQ.CURRVAL is not yet defined in this session
ORA-06512: at "SCOTT.SECOND_TRIGGER", line 2
ORA-04088: error during execution of trigger 

The reason is that SCOTT.SECOND_TRIGGER fires before SCOTT.FIRST_TRIGGER, and trigger_demo_seq.currval is only defined after trigger_demo_seq.nextval has been called, which happens only in SCOTT.FIRST_TRIGGER.
Since Oracle 11g you can tell a trigger to fire after another trigger by adding the FOLLOWS clause to it. This way you can make sure your trigger gets fired after the other one.

CREATE OR REPLACE TRIGGER second_trigger
  BEFORE INSERT ON trigger_demo
  FOR EACH ROW
  FOLLOWS first_trigger
BEGIN
  :new.second_trigger_value := 'Second Trigger ' || to_char(trigger_demo_seq.currval);
END;
/
INSERT INTO trigger_demo
  (VALUE)
VALUES
  ('Patrick')
/
SELECT *
  FROM trigger_demo
/

This way second_trigger gets fired after first_trigger. Unfortunately there is no clause to make a trigger fire before the original code. A PRECEDES clause does exist, but it is reserved for cross edition triggers, which are part of Edition Based Redefinition, and that is a whole different subject.
I hope this answers your question.

Happy Oracle’ing,
Patrick Barel

If you have any comments on this subject or you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in OTech Magazine of Fall 2015.

XMLIndex on In-Memory Variables

Tom Kyte - Thu, 2016-10-06 04:06
Hi, I went through the articles on XMLIndex, which is created on a table with XMLType columns. If I am processing a PL/SQL package - procedure or functions - with XMLType variables passed through different procedures, can I make use of XM...
Categories: DBA Blogs

XMLROOT

Tom Kyte - Thu, 2016-10-06 04:06
Hello Oracle Guruji Thanks for your support in Advance, I have one doubt on XMLROOT, this is my query ops$tkyte@ORA9IR2>select xmlroot(XMLELEMENT("DriverDescription",'$'||'$'||'s partner payout by PL '||' ='||'>' ||' L2 BU'),VERSION '1.0')as emp ...
Categories: DBA Blogs

Oracle bulk collect/ forall for dynamic usage

Tom Kyte - Thu, 2016-10-06 04:06
Hi Tom, Thanks to all the great comments provided in the below link : https://asktom.oracle.com/pls/apex/f?p=100:11:114010771330237::::P11_QUESTION_ID:3675533064673 I am currently looking for a solution to a problem in similar lines. Inste...
Categories: DBA Blogs

How to get Unlocked records from a table

Tom Kyte - Thu, 2016-10-06 04:06
Hi Tom, I am a beginner in Oracle. I want to lock records while we update them in a stored procedure. Meanwhile, if any other user tries to fetch the same record which is locked, I should not allow him to fetch that record. As per my knowledge I ca...
Categories: DBA Blogs

datafile block fractured

Tom Kyte - Thu, 2016-10-06 04:06
Hi, Can you pls let me know what is the difference between - select * from v$database_block_corruption; corruption column showing - corruption or fractured What is corruption and what is fractured How to resolve corruption or fra...
Categories: DBA Blogs

View Syntax isn't working

Tom Kyte - Thu, 2016-10-06 04:06
Tom, I have an existing view that works fine. It is used to compare rows in two tables and to display which table has records that the other doesn't. I've been asked to add one more column to this view from a new table. For one reason or ano...
Categories: DBA Blogs

Temporary tables (comparison with SQL Server)

Tom Kyte - Thu, 2016-10-06 04:06
We are migrating some Stored Procedures from Sybase 11 to Oracle 8i. Most of them use temporary tables like: SELECT x, y, z INTO #Temp FROM some_table ... -OR- CREATE TABLE #Temp ( .... ) Where #Temp is a temporary table in a temporary area wh...
Categories: DBA Blogs

How to size the log_buffer

Tom Kyte - Thu, 2016-10-06 04:06
Hello In looking into how to tune log_buffer, I came across two different warnings. One claims that if the log buffer is too small, we will get a significant amount of "log buffer space." The solution is to "consider making the log buffer bigger...
Categories: DBA Blogs

Running a Cassandra cluster in a single server

Yann Neuhaus - Thu, 2016-10-06 02:25

This blog post is about configuring Apache Cassandra to run a cluster of 3 instances on a single host. Basically C* is not really made to run multiple instances on one machine, but there are many cases and reasons why you might run a C* cluster on a single server. The principal reason is test cases.

Context

The objective is to have a Cassandra ring of 3 instances on 1 host. For that, we will use:

  • apache-cassandra-3.0.9 (community version)
  • Separate disks to segregate binaries, data and logs: binaries, data and logs will be in separate mount points
  • A dedicated IP interface for each instance (virtual interfaces in our case)
  • Instance names: csd1, csd2, csd3
  • Cluster name: dbitest
Binary installation

Download the binaries from the Apache website, http://www.apache.org/dyn/closer.lua/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz, and install them. In this post we are going to install the 3.0.9 release.

wget http://www.pirbot.com/mirrors/apache/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz

Extract it:

tar -zxvf apache-cassandra-3.0.9-bin.tar.gz

Place the software into the product directory:

cp -r apache-cassandra-3.0.9 /u00/app/cassandra/product/

 

Instances configuration

First, we will create the admin directory for each instance. The admin directory contains the following sub folders:

  • /u00/app/cassandra/admin/csd$i
    • /backup: soft link to the backup mount point
    • /dump: soft link to the dump mount point
    • /etc: Cassandra configuration files, such as cassandra.yaml, logback.xml and cassandra-env.sh
    • /log: soft link to the log file on the log mount point
    • /pid: contains the pid of the C* process

To create these directories, use the following loop.

for i in 1 2 3 ;
do mkdir -p /u00/app/cassandra/admin/csd$i &&
mkdir /u00/app/cassandra/admin/csd$i/backup &&
mkdir /u00/app/cassandra/admin/csd$i/dump &&
mkdir /u00/app/cassandra/admin/csd$i/etc &&
mkdir /u00/app/cassandra/admin/csd$i/log &&
mkdir /u00/app/cassandra/admin/csd$i/pid &&
mkdir /u00/app/cassandra/admin/csd$i/product;
done

Next copy the C* configuration files into each admin/csd$i/etc directory. Every instance has its own configuration files.

for i in 1 2 3;
do cp -r /u00/app/cassandra/admin/csd$i/product/apache-cassandra-3.0.9/conf/* /u00/app/cassandra/admin/csd$i/etc/;
done

Now, create the three data directories for each instance.

for i in 1 2 3;
do
sudo mkdir -p /u01/cassandradata/csd$i &&
sudo chown -R cassandra:cassandra /u01/cassandradata/csd$i;
done

Configure virtual network interfaces

You must create additional virtual network interfaces for each C* instance, to use a different listen_address and rpc_address for each instance.

sudo ifconfig enp0s3:0 192.168.56.105
sudo ifconfig enp0s3:1 192.168.56.106

You then have 3 interfaces (the default plus 2 virtual ones) for your C* instances.

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.104  …

enp0s3:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.105  …

enp0s3:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.106  …

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        …

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
…

Now that all the required files are in place, we can change the 3 main configuration files to set up the three instances (csd1, csd2 and csd3).

cassandra-env.sh:

Use a different “JMX_PORT” for each node. For instance: 7199 for csd1, 7299 for csd2 and 7399 for csd3.

JMX will bind to the localhost IP by default; you can keep the default.

cassandra.yaml

The central configuration file of Apache Cassandra. Change the following parameters:

  • cluster_name: ‘dbitest’ (all instances must have the same cluster name)
  • commitlog_directory: /u01/cassandradata/csd$i/commitlog
  • data_file_directories: /u01/cassandradata/csd$i/data
  • saved_caches_directory: /u01/cassandradata/csd$i/saved_caches
  • listen_address: the instance’s own IP (192.168.56.104, .105 or .106)
  • rpc_address: the instance’s own IP (same as listen_address here)
  • seeds: the list of IP addresses of the seed nodes

logback.xml

The configuration file for C* logs. Make the following changes:

<file>${cassandra.logdir}/system.log</file> to <file>/u00/app/cassandra/admin/csd$i/log/system.log</file>

<fileNamePattern>${cassandra.logdir}/system.log.%i.zip</fileNamePattern> to 
<fileNamePattern>/u00/app/cassandra/admin/csd$i/log/system.log.%i.zip</fileNamePattern>

<file>${cassandra.logdir}/debug.log</file> to <file>/u00/app/cassandra/admin/csd$i/log/debug.log</file>

<fileNamePattern>${cassandra.logdir}/debug.log.%i.zip</fileNamePattern> to
<fileNamePattern>/u00/app/cassandra/admin/csd$i/log/debug.log.%i.zip</fileNamePattern>

Make these changes for each of the three instances.

Starting the nodes

Before starting each node, you must dynamically set the environment variables for each of them.

export CASSANDRA_HOME=/u00/app/cassandra/product/apache-cassandra-3.0.9
export CASSANDRA_CONF=/u00/app/cassandra/admin/csd$i/etc/

After setting the variables for each node, you can start Apache Cassandra from the command line:

$CASSANDRA_HOME/bin/cassandra -f

Execute the same command in a separate shell for each node. Do not forget to set the environment variables $CASSANDRA_HOME and $CASSANDRA_CONF in each shell.

Cluster status

Verify the cluster status with the nodetool utility and check that all nodes are up and running.

[cassandra@test bin]$ ./nodetool status
 Datacenter: datacenter1
 =======================
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address         Load       Tokens       Owns (effective)  Host ID                               Rack
 UN  192.168.56.104  263.42 KB  256          64.5%             cddee7f4-c0d5-4cba-9ddc-b773a08a9245  rack1
 UN  192.168.56.105  212.19 KB  256          67.5%             19cd17d2-1aeb-48e5-9299-7a0282c2e92e  rack1
 UN  192.168.56.106  147.54 KB  256          68.0%             f65d5b92-ed3b-4f68-b93d-72e6162eafca  rack1

 

Enjoy ;-)

 

The article Running a Cassandra cluster in a single server first appeared on Blog dbi services.
