Feed aggregator

how to count the number of records in a file

Tom Kyte - Fri, 2017-12-08 16:06
How do I count the number of records in a flat file (.csv/.dat), including the header and trailer records? The code was written in PL/SQL using utl_file. Can you suggest the best method to implement the logic in PL/SQL? And the trailer record sho...
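The question asks about PL/SQL with UTL_FILE, but the counting logic itself is tiny. Here is a sketch of it in Python for illustration (the file name and contents are made up); a UTL_FILE-based routine would use the same loop, calling GET_LINE until NO_DATA_FOUND and incrementing a counter:

```python
import tempfile

# Hypothetical flat file: 1 header + 3 data records + 1 trailer = 5 records
sample = "HDR|20171208\n1,alpha\n2,beta\n3,gamma\nTRL|3\n"

def count_records(path):
    """Count all records (lines) in a flat file, header and trailer included."""
    with open(path) as f:
        return sum(1 for _ in f)

with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as tmp:
    tmp.write(sample)

print(count_records(tmp.name))  # → 5
```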
Categories: DBA Blogs

advice given by segment advisor 12c

Tom Kyte - Fri, 2017-12-08 16:06
Hi Tom, My question is: does the segment advisor in 12cR1 advise the use of online table redefinition for tables in both dictionary managed and locally managed tablespaces? In addition, what about the use of segm...
Categories: DBA Blogs

Dynamic fields in External File

Tom Kyte - Fri, 2017-12-08 16:06
Sorry if it's a weird requirement. I have an input feed coming in like the following (only two rows per file): col1, col2, col3,Attribute_1,Attribute_2,Attribute_3,......,Attribute_n col1, col2, col3,Value_1,Value_2,Value_3,......,Value_n ...
Categories: DBA Blogs

Reconstruct sale from audit commands

Tom Kyte - Fri, 2017-12-08 16:06
We are running on, 12.1 and 12.2. SQL> show parameter audit_trail NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ audit_trail ...
Categories: DBA Blogs

Reg: Elapsed time

Tom Kyte - Fri, 2017-12-08 16:06
Hello Team, I am quite confused with the definition of elapsed time. Is elapsed time equal to DB time? But DB time = CPU time + wait time, so is that correct or not? Somewhere I found that elapsed time is nothing but wall time. Please comment on the above statem...
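To untangle the definitions the question mixes up: elapsed time is wall-clock time, while DB time is CPU time plus non-idle wait time summed over all active sessions, so the two coincide only for a single serial session. A small illustrative calculation (the numbers are made up):

```python
# Two hypothetical sessions active during the same 60-second wall-clock window
sessions = [
    {"cpu_s": 40, "wait_s": 20},
    {"cpu_s": 30, "wait_s": 10},
]

# DB time = CPU time + (non-idle) wait time, summed over all sessions
db_time = sum(s["cpu_s"] + s["wait_s"] for s in sessions)
elapsed = 60  # wall-clock (elapsed) time of the window

print(db_time)            # 100
print(db_time > elapsed)  # True: with concurrency, DB time exceeds wall time
```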
Categories: DBA Blogs


#UKOUG_TECH17

Yann Neuhaus - Fri, 2017-12-08 15:40

I received an award for an article I wrote last year: the Most Read Oracle Scene Article in 2016 – Technology. I like to write for Oracle Scene magazine. It is rare today to have a magazine that is both available in a paper version and publicly available on the web. And I must say that, as an author, all the people behind it are doing great work. Thanks to them and thanks for the award. Seeing that what I write helps others is my motivation to keep doing it.

The article is: Is your AWR/Statspack Report Relevant.

This is the occasion to link to other articles I’ve written for the UKOUG Magazine. Statspack, because not everybody has Enterprise Edition with optional Diagnostic Pack: Improving Statspack Experience. One on the parameter I hate the most: CBO Choice between Index & Full Scan: the Good, the Bad & the Ugly parameters. Another about the statistics that should replace this awful parameter: Demystifying WORKLOAD System Statistics Gathering.

Two interesting 12c features: 12c Online Statistics Gathering & Session Private Statistics. A big change that happened concerning availability: I/O Error on Datafile: Instance Crash or Datafile Offline?. My SLOB performance tests when ODA Lite came with NVMe SSD: Performance for All Editions on ODA X6-2S/M. And finally, the article on the great Oracle features for physical transport/clone/migration: From Transportable Tablespaces to Pluggable Databases.

I’ve stolen a few pictures to illustrate this blog post, from UKOUG photo albums, and from friends.


The article on Transportable Tablespaces and Pluggable Databases is actually based on one presentation I did. I was in that big Hall 1 [I realize that some may make a joke about this, but I'm talking about the room] where I looked like Ant-Man from the back of the room. But doing live demos is great on this huge screen.

The idea for this presentation came 1 year ago when preparing a 5 minutes talk for Oracle Open World 2016 ACED talks (blog post about this) and this subject is so exciting that I wanted to share more about it. Online PDB clone and relocate will be the features you will like the most when going to Multitenant.

My second presentation was more developer focused, covering all the join methods that can magically construct your query result in record time or, when a bad join method is chosen, leave your 2-second query still running after one hour. I explained the join methods in three ways: theory with .ppt animations, execution plans with dbms_xplan, and live query runs with SQL Monitor, showing the different operations and A-Rows increasing in real time.

I was also co-presenting in a roundtable on Oracle Cloud PaaS, sharing my modest experience with DBaaS. Everybody talks about Cloud, and it is good to talk about the problems encountered and how to deal with them.

Round tables

Despite strong competition from good sessions, I also attended a few round tables. These conferences are a good opportunity to meet and share with other users and product managers. Oracle is a big company, and we sometimes think that they care only about their biggest customers, but that is not accurate. There are several product managers who really listen to customers. A great discussion was about something that has slowly changed over the past few years: the documentation, which was really precise in the past – explaining the When, Why, and How – is now more vague.


Geeking at the Oracle Developer Community lounge, eating something decent near the conference center, drinking while discussing technical stuff, the ACE dinner (and the first ACED briefing outside the US), posing with award winners, and the amazing chocolate tasting…



The article #UKOUG_TECH17 appeared first on Blog dbi services.

How do I create a responsive Rich Text Editor in Oracle APEX?

Joel Kallman - Fri, 2017-12-08 13:40
I was in a video call this morning with a great customer from England (and by the way, this customer is in the process of transforming the healthcare industry across the UK).  They asked me a very simple question:

How do I create a responsive Rich Text Editor item on a page?

Simple question and answer, right?  Well, you'd be wrong.  While we pride ourselves on the responsive user interfaces that you can easily create with Oracle Application Express (APEX), unfortunately, the item type of Rich Text Editor is not responsive, out of the box.

So - I did what all smart people do, and I reached out to the Oracle APEX Development team, in this case, the ever-intelligent Carsten Czarski.  And in a few minutes, he showed me exactly what I needed to do.

  1. Open up Application Builder, and in Page Designer, edit the page with the Rich Text Editor item.  In my example, my Rich Text Editor page item name is P3_RESUME.
  2. Navigate to the attributes of the Rich Text Editor item, and in the Advanced section, enter the following code in the "JavaScript Initialization Code" attribute:
    function (o) {
        o.width = $("#P3_RESUME").closest(".t-Form-inputContainer").width() - 5;
        o.height = 300; // Specify your desired item height, in pixels
        return o;
    }
    This code determines the width of the region container of the item, subtracts 5, and returns the object initialized to this size.  This will take care of the Rich Text Editor when the page is initially displayed. But it won't handle the case when the browser is resized. To handle that case, we'll need to add a dynamic action.
  3. Click the Dynamic Actions sub-tab in Page designer (the lightning bolt)
  4. Select Events in the upper-left, right-click your mouse and choose "Create Dynamic Action".
  5. In the attributes, enter "Resize" for Name, and select "Resize" for the Event.
  6. Select the True action of the dynamic action (it should be "Show").  Change the Action to "Execute JavaScript Code".
  7. In the Code attribute, enter the code:
    CKEDITOR.instances.P3_RESUME.resize( $("#P3_RESUME").closest(".t-Form-inputContainer").width() - 5, 300);
    This is an absolute reference to the Rich Text Editor item on the page, named P3_RESUME. And like the code before, this determines the width of the item's container, subtracts 5 from it, and invokes the resize() method of the Rich Text Editor (CKEditor) element.
That's all there is to it!

Obviously, this item type (like all others) should be responsive, out of the box.  And Carsten is looking at this for the next version of APEX.  In the meantime, if you're using Universal Theme with Oracle APEX 5.1, all it takes is a tiny amount of JavaScript to get a responsive Rich Text Editor.

Keyword Rank Tracking Tools

Nilesh Jethwa - Fri, 2017-12-08 13:34

An important element of search engine optimization (SEO) is choosing the right keyword. With the right keywords, you can make your content rank on search engines. But the work doesn't stop after ranking; you still need to track the position of your keyword in search results. That way, you can obtain helpful information that will guide you in keeping your SEO efforts successful.

Why Check Keyword Ranking Regularly

One of the main reasons why you need to check your keyword ranking is to identify target keywords. Any SEO professional or blogger should understand how important it is for their content marketing strategies. In fact, a common mistake committed by website administrators and bloggers is writing and publishing articles that don’t target any keywords. It’s like aiming your arrow at something that you are not sure of.

Here are some of the best tools you can take advantage of when tracking your keyword rank:

SEMrush. This keyword rank tracking tool takes 10 to 15 minutes to determine which keywords or key phrases to use. Whether you are a webmaster or an SEO specialist, it will help you analyze data for your clients and website. It also offers useful features such as in-depth reports, keyword grouping, and competitor tracking.

Google Rank Checker. This is a premium online tool that you can use for free. It helps you track keyword positioning while making sure that you appear in search results. To use Google Rank Checker, all you need to do is enter the keywords that you want to check as well as your domain name. After putting in the details, you can then view the keyword rank.


Read more at https://www.infocaptor.com/dashboard/best-tools-for-keyword-rank-tracking

Reverse engineer existing Oracle tables to Quick SQL

Dimitri Gielis - Fri, 2017-12-08 13:21
If you haven't heard about Oracle Quick SQL, it's time to read about it, as it's something you already have without knowing it (it's a packaged app in Oracle APEX) and I believe you should start using it :)

Quick SQL enables you to rapidly design and prototype data models using a markdown-like shorthand syntax that expands to standards-based Oracle SQL. You can easily create master detail relationships, check constraints, and even generate sample data.
In my blog post Create the Oracle database objects I go over the history of how I created database objects, why I think Quick SQL is great, and why I use it.

I guess most people typically use Quick SQL at the start of a new project, as it's the quickest way to create your data model and Oracle database objects. That is my primary use case too, but I started to use Quick SQL even on projects where database objects already exist.

In the project I'm currently involved in, the data model was generated by another tool, but as we iterate through the project, tables change, columns get renamed and added, a row version was requested, triggers need to be made Oracle APEX aware...

Now we could make those changes manually, but I thought it made much more sense to create the data model in Quick SQL and use the features that come with it. By clicking a checkbox we can include a row version, and Quick SQL generates the triggers automatically in an APEX-aware form; we can generate as much sample data as we want by adding /insert; and we can use all the other features that come with Quick SQL. For example, when you want to include a history table in the future, it's just another checkbox to click.

It's also easy to check-in the Quick SQL script into source control, together with the generated DDL.
If changes need to be done, we can adapt in Quick SQL and generate the DDL again and we see the changes immediately. It would be nice if Quick SQL could generate the ALTER statements too, but that's not the case yet. But it's easy enough to see the changes that were done by comparing the scripts in source control.
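Since Quick SQL does not generate ALTER statements, one lightweight way to see what changed between two generated scripts is an ordinary text diff. A sketch in Python using difflib (the DDL snippets and file names are hypothetical):

```python
import difflib

# Hypothetical old and new DDL generated by Quick SQL
old_ddl = ["create table employees (", "  name  varchar2(50)", ");"]
new_ddl = ["create table employees (", "  name  varchar2(100)", ");"]

# Produce a unified diff, the same format source control tools show
diff = list(difflib.unified_diff(old_ddl, new_ddl,
                                 fromfile="old.sql", tofile="new.sql",
                                 lineterm=""))
for line in diff:
    print(line)
```

In practice, the same diff falls out of source control for free when both the Quick SQL script and the generated DDL are checked in, as described above.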

If you also want to reverse engineer an existing model into Quick SQL, here's a script that gives you a head start by generating the markdown-style format.

I tried the script on the Quick SQL data model itself - the result you see below:

Hopefully you see the benefit of using Quick SQL in existing projects too, and the script helps you get there. Quick SQL also gets frequent updates - in the upcoming release (17.3.4), which is already online, you can add a Security Group ID to every table (to make your app multi-tenant) and rename the audit columns to your own naming conventions.
Categories: Development

SQL Server Tips: an orphan user owns a database role

Yann Neuhaus - Fri, 2017-12-08 02:15

A few days ago, I conducted an audit to detect all orphaned Windows accounts in a database and I was surprised to get an error during the DROP USER query.


The first step is to find all orphaned Windows accounts in the database:

USE [dbi_database]

/* Step 1: Search for orphan users */

SELECT * FROM sys.database_principals a
LEFT OUTER JOIN sys.server_principals b ON a.sid = b.sid
WHERE b.sid IS NULL
AND   a.type IN ('U', 'G')
AND   a.principal_id > 4


I found the user called "dbi\orphan_user" and ran the query to drop it:

/* Drop orphan user */

DROP USER [dbi\orphan_user]



But as you can see, I received the error message:

Msg 15421, Level 16, State 1, Line 4

"The database principal owns a database role and cannot be dropped."


This user is the owner of database roles…

Be careful, it is not this error message:

Msg 15138, Level 16, State 1, Line 4

The database principal owns a schema in the database, and cannot be dropped.

In this case, the user owns a schema.

Do not confuse these two error messages:

  • Msg 15421 is for database role
  • Msg 15138 is for schema


The goal is to find all database roles owned by the user dbi\orphan_user:

/* Search database roles owned by this orphan user */

SELECT dp2.name, dp1.name FROM sys.database_principals AS dp1
JOIN sys.database_principals AS dp2
ON dp1.owning_principal_id = dp2.principal_id
WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';

As you can see in my SELECT, I use the view sys.database_principals twice, to cross-check the owning_principal_id against the principal_id.


After that, I changed the owner of each role to the correct one (by default dbo).

/* Change the owner of the database role */

ALTER AUTHORIZATION ON ROLE::<database role> TO dbo;


And I dropped the orphan user without problems…

/* Drop orphan user */

DROP USER [dbi\orphan_user]



To finish, I give you a Santa Claus gift:

I also rewrote the query to include the "ALTER AUTHORIZATION" statement directly in the SELECT. You just have to copy/paste and execute it:

SELECT dp2.name, dp1.name, 'ALTER AUTHORIZATION ON ROLE::' + dp1.name + ' TO dbo;' as query
FROM sys.database_principals AS dp1
JOIN sys.database_principals AS dp2
ON dp1.owning_principal_id = dp2.principal_id
WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';


Et voila! 8-)


The article SQL Server Tips: an orphan user owns a database role appeared first on Blog dbi services.

Data Guard vs Active Data Guard

Tom Kyte - Thu, 2017-12-07 21:46
Hi, Please could I get some answers to the following: What is the difference between DG and ADG? What are the benefits of DG? What are the benefits of ADG? What are the risks / constraints of using a DR environment for near real ti...
Categories: DBA Blogs

Improving Google Crawl Rate Optimization

Nilesh Jethwa - Thu, 2017-12-07 16:12

There are different components that form an SEO strategy, one of which is commonly referred to as crawling, with tools often being called spiders. When a website is published on the Internet, it is indexed by search engines like Google to determine its relevance. The site is then ranked on the search engine with a higher ranking being attributed to a higher visibility potential per primary keyword.

In its indexing process, a search engine must be able to crawl through the website in full, page by page, so that it can determine the site's digital value. This is why it's important for all elements of the page to be crawlable, or else there might be pages that the search engine won't be able to index. As a result, these won't be displayed as relevant results when someone searches with a relevant keyword.

Search engines like Google work fast. A website can be crawled and indexed within minutes of publishing. So one of your main goals is to see to it that your site can be crawled by these indexing bots or spiders. In addition, the easier your site is to crawl, the more points the search engine will add to your overall score for ranking.

There are different methods that you can try to optimize your crawl rate and here are some of them:

Read more at https://www.infocaptor.com/dashboard/how-to-improve-google-crawl-rate-optimization

Errors when downloading a file on page submit in Oracle Application Express 5.1 or later...

Joel Kallman - Thu, 2017-12-07 15:59
Recently, Sharon Kennedy from our team approached me for some help with file download in Oracle Application Express (APEX).  Sharon is the primary developer of Oracle Live SQL (among many of her other responsibilities), and she wanted to initiate a file download in a page process, after page submission.  Since I've done this 100 times in APEX applications, it should be easy, right?

Back in 2014, I wrote a short blog post showing how to generate a link to download a file from a BLOB stored in a table.  But this problem was slightly different.  The application flow was:

  1. In Oracle Live SQL Administration, an administrator would click the button "Download Oracle Content"
  2. The page would then be submitted, and a PL/SQL page process would fire, which would query all of the static scripts and tutorials from Live SQL, zip them up using APEX_ZIP, and initiate a file download.

However, when the button was clicked, the page would be submitted, no file download would be initiated, and the following error was displayed on the page:

Error: SyntaxError: Unexpected token r in JSON at position 0

After spending more than an hour debugging the Live SQL application, I resorted to a simpler test case.  I created a trivial application with a button on the first page, which would submit and invoke the PL/SQL page process:

declare
  l_file_blob     blob;
  l_file_name     apex_application_files.filename%type;
  l_file_mimetype apex_application_files.mime_type%type;
begin
  select blob_content, mime_type, filename
    into l_file_blob, l_file_mimetype, l_file_name
    from apex_application_files
   where id = 2928972027711464812;
  sys.owa_util.mime_header( l_file_mimetype, false );
  sys.htp.p('Content-Disposition: attachment; filename="' || l_file_name || '"');
  sys.htp.p('Content-length: ' || sys.dbms_lob.getlength( l_file_blob ));
  sys.wpg_docload.download_file( l_file_blob );
  -- Stop page processing
  apex_application.stop_apex_engine;
end;

With my test case, I encountered exactly the same meaningless error message: "Error: SyntaxError: Unexpected token r in JSON at position 0".

I finally gave up and contacted Patrick Wolf on the APEX product development team, who helped me solve this problem in one minute.  Granted...Patrick was both the creator of the problem and the creator of the solution!

To resolve this problem:

  1. Open the page in Page Designer in Application Builder
  2. Edit the page attributes
  3. In the Advanced section of the page properties on the right hand side of Page Designer, change "Reload on Submit" to "Always" (changing it from "Only for Success" to "Always")
That's it!

Setting "Reload on Submit" to "Always" will POST the page and render the result using the behavior as it was in APEX 5.0 and earlier.  In APEX 5.1, if "Reload on Submit" is set to "Only for Success" (the default), it will use the new optimized page submission process and expect a specifically formatted JSON result returned from the APEX engine.  Obviously, when I employ a page process which overrides the HTP buffer and emits binary content (instead of a well-formed JSON result), the libraries on the page don't know how to deal with that, and thus we get this obtuse "Unexpected token r..." message.
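The wording of the error makes sense once you know the page JavaScript tries to parse the whole response as JSON: the raw file content starts with a character that is illegal as the first token of a JSON document. Any JSON parser fails the same way at position 0; a quick Python illustration (the response text here is hypothetical):

```python
import json

response_body = "raw file bytes instead of JSON"  # what the page process emitted
error_pos = None
try:
    json.loads(response_body)
except json.JSONDecodeError as e:
    error_pos = e.pos  # position of the first offending character

print(error_pos)  # 0 -- parsing fails on the very first character,
                  # just like "Unexpected token r ... at position 0"
```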

Secure Oracle E-Business Suite 12.2 with Allowed Redirects

Steven Chan - Thu, 2017-12-07 13:37

A redirect is an HTTP response with status code "302 Found" and is a common method for redirecting a URL. Client redirects are a potential attack vector. The Oracle E-Business Suite 12.2.4+ Allowed Redirects feature allows you to define a whitelist of allowed redirects for your Oracle E-Business Suite 12.2 environment. Allowed Redirects is enabled by default with Oracle E-Business Suite 12.2.6.

When the Allowed Redirects feature is enabled, redirects to sites that are not configured in your whitelist are not allowed. This feature provides defense against unknown and potentially damaging sites, and will prevent this kind of redirect-based attack if properly configured.

Your users will see an error message if a redirect is blocked by Allowed Redirects.

Note: Allowed Redirects will only block navigation to sites that happen via client redirects. It is not intended to prevent other methods for accessing external sites.
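Conceptually, a redirect whitelist of this kind boils down to checking the target of every client redirect against a configured list before issuing the 302. A sketch of the idea in Python (the host names are hypothetical, and this is not E-Business Suite code):

```python
from urllib.parse import urlparse

# Hypothetical whitelist of redirect targets
ALLOWED_HOSTS = {"ebs.example.com", "sso.example.com"}

def redirect_allowed(url: str) -> bool:
    """Allow a client redirect only if its target host is whitelisted."""
    return urlparse(url).hostname in ALLOWED_HOSTS

print(redirect_allowed("https://ebs.example.com/OA_HTML/OAHomePage"))  # True
print(redirect_allowed("https://attacker.example.net/phish"))          # False
```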

Where can I learn more?

Related Articles


Categories: APPS Blogs

GoldenGate, SOA Admin, OAM & Apps DBA + FREE Training This Week

Online Apps DBA - Thu, 2017-12-07 01:57

  In this Week, you will find: 1. Oracle GoldenGate for DBAs, Apps DBAs and Cloud Migration    1.1 [FREE Live Webinar] Learn Oracle GoldenGate What, Why & How    1.2 Oracle GoldenGate 12c: Troubleshooting using LogDump Utility 2. Concurrent Managers: Overview & Concepts Oracle EBS R12 for Apps DBAs 3. For SOA & FMW Admins: Oracle SOA Suite Administration: […]

The post GoldenGate, SOA Admin, OAM & Apps DBA + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Grouping Id

Tom Kyte - Thu, 2017-12-07 01:06
Hi Tom, I never really understood the usage of the GROUPING_ID function in OTN. I heard it avoids using multiple GROUPING functions. Can you please illustrate with a small example? Thanks
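For context while waiting for the answer: GROUPING_ID(c1, ..., cn) simply packs the n individual GROUPING() bits into one integer, with the leftmost argument in the most significant bit. A sketch of the packing rule in Python (illustrative only, not Oracle code):

```python
def grouping_id(*bits):
    """Pack GROUPING() bits (1 = column aggregated away) into one integer."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

# For GROUP BY ROLLUP(dept, job), the possible row levels are:
print(grouping_id(0, 0))  # 0 -> detail row (dept and job both present)
print(grouping_id(0, 1))  # 1 -> subtotal per dept (job rolled up)
print(grouping_id(1, 1))  # 3 -> grand total
```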
Categories: DBA Blogs

How To Benefit From SEO Audit

Nilesh Jethwa - Wed, 2017-12-06 16:37

Businesses need to capitalize on the growing online market if they want to succeed in modern commerce. The thing about Internet marketing is that there are a number of things that have to be addressed to ensure that sites are performing well and actually exist as assets for companies that use them.

One of the things that businesses online need to ensure is that they run an SEO audit every now and then. What the audit does is give them insights as to how their websites are performing from its current search engine standing to its effectiveness as an online marketing tool.

It’s important that business sites provide information and remain relevant. With the SEO audit, companies can determine which particular components need improvement and which ones are functioning correctly. Everything from content quality to backlinking to indexing is assessed through this process and this is why it’s something that can’t be discounted from the equation.

Unbeknownst to most people, an SEO audit doesn’t only look into the performance of on-page activities. It also assesses any off-page activities that a company might currently be or have engaged in. When it comes to the latter, a good example would be the assessment of the performance, reliability, and value of third-party inbound links.

Read more at https://www.infocaptor.com/dashboard/the-benefits-of-an-seo-audit

Announcing Open Source Jenkins Plugin for Oracle Cloud Infrastructure

OTN TechBlog - Wed, 2017-12-06 15:12

Jenkins is a continuous integration and continuous delivery application that you can use to build and test your software projects continuously. The Jenkins OCI Plugin is now available on GitHub and allows users to access and manage Oracle Cloud Infrastructure resources from Jenkins. A Jenkins master instance with the Jenkins OCI Plugin can spin up slaves (instances) on demand within Oracle Cloud Infrastructure, and remove the slaves automatically once the job completes.

After installing the Jenkins OCI Plugin, you can add an OCI Cloud option and a template with the desired Shape, Image, Domain, etc. The template will have a Label that you can use in your Jenkins job. Multiple templates are supported. The template options include Labels, Domains, Credentials, Shapes, Images, Slave Limits, and Timeouts.

Below you will find instructions for building and installing the plugin, which is available on GitHub: github.com/oracle/jenkins-oci-plugin

Installing the Jenkins OCI Plugin

The following section covers compiling and installing the Jenkins OCI Plugin.

Plugins required:
  • credentials v2.1.14 or later
  • ssh-slaves v1.6 or later
  • ssh-credentials v1.13 or later
Compile and install OCI Java SDK:

Refer to OCI Java SDK issue 25. Tested with Maven versions 3.3.9 and 3.5.0.

Step 1 – Download and install the OCI Java SDK

$ git clone https://github.com/oracle/oci-java-sdk
$ cd oci-java-sdk
$ mvn compile install

Step 2 – Compile the plugin hpi file

$ git clone https://github.com/oracle/jenkins-oci-plugin
$ cd jenkins-oci-plugin
$ mvn compile hpi:hpi

Step 3 – Install hpi

  • Option 1 – Manage Jenkins > Manage Plugins > Click the Advanced tab > Upload Plugin section, click Choose File > Click Upload
  • Option 2 – Copy the downloaded .hpi file into the JENKINS_HOME/plugins directory on the Jenkins master
Restart Jenkins and “OCI Plugin” will be visible in the Installed section of Manage Plugins.

For more information on configuring the Jenkins Plugin for OCI, please refer to the documentation on the GitHub project. And if you have any issues or questions, please feel free to contact the development team by submitting through the Issues tab.

Related content

Benefits Of Transportation and Logistic Dashboards

Nilesh Jethwa - Wed, 2017-12-06 14:45

Transportation and Logistics Dashboard and KPI benefits

Currently, the industry of transportation and logistics is gaining momentum regarding the use of dashboards. Increasingly, non-technical companies involved in this sector are seeing the advantages of implementing data-driven platforms in their marketing goals.

The most notable change is the substantial potential, as well as interest, in moving from small cities to “smart” cities. For instance, startups are co-creating valuable products with transportation and logistics dashboard providers through modern solutions like TransportBuzz.

Dashboards for Transportation and Logistics

Dashboards, or data-driven platforms, are a modern way of simplifying how organizations control and manage access to assets like services and data. Usually, that leads to the following advantages:

  • More revenue channels.
  • Increased brand awareness and wider reach.
  • External sources that facilitate open innovation.
  • Better operational efficiency.

Read more at http://www.infocaptor.com/dashboard/

How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7?

Yann Neuhaus - Wed, 2017-12-06 10:17

DISCLAIMER: I know other solutions exist to do this

– a virtual machine (or not) with CentOS7 installed
– a free disk or partition

I use a VBox machine and I added a new hard disk

We list the disks and partitions to check that our new hard disk was added.

[root@deploy ~]$ lsblk
sda                          8:0    0   20G  0 disk
├─sda1                       8:1    0    1G  0 part /boot
└─sda2                       8:2    0   19G  0 part
  ├─cl-root                253:0    0   21G  0 lvm  /
  └─cl-swap                253:1    0    2G  0 lvm  [SWAP]
sdb                          8:16   0   10G  0 disk

Good, we can continue..

Let’s partition the disk using fdisk

[root@deploy ~]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x76a98fa2.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now, we need to inform the kernel that the partition table has changed. To do that, either we reboot the server or we run partprobe

[root@deploy ~]$ partprobe /dev/sdb1
[root@deploy ~]$

We create a physical volume

[root@deploy ~]$ pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
[root@deploy ~]$ pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/sda2  cl lvm2 a--  19.00g       0
  /dev/sdb1     lvm2 ---   5.00g    5.00g
  /dev/sdc2  cl lvm2 a--   5.00g 1020.00m

We create a volume group

[root@deploy ~]$ vgcreate  vg_deploy /dev/sdb1
  Volume group "vg_deploy" successfully created

We check that the volume group was created properly

[root@deploy ~]$ vgdisplay vg_deploy
  --- Volume group ---
  VG Name               vg_deploy
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / 5.00 GiB
  VG UUID               5ZhlvC-lpor-Ti8x-mS9P-bnxW-Gdtw-Gynocl

Here, I set the size of the logical volume in PEs (Physical Extents); one PE represents 4.00 MiB.

We create a logical volume on our volume group

[root@deploy ~]$ lvcreate -l 1000 -n lv_deploy vg_deploy
  Logical volume "lv_deploy" created.
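The `-l 1000` figure maps to gigabytes via the 4 MiB PE size noted above; a quick check of the numbers from the vgdisplay and lvdisplay listings (Python):

```python
PE_MIB = 4      # "PE Size" reported by vgdisplay
free_pe = 1279  # "Free PE" reported for vg_deploy

print(free_pe * PE_MIB / 1024)  # ≈ 5.0 GiB: the whole volume group
print(1000 * PE_MIB / 1024)     # 3.90625 GiB: the "3.91 GiB" lvdisplay shows
```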

We have a look at what our logical volume “lv_deploy” looks like

[root@deploy ~]$ lvdisplay /dev/vg_deploy/lv_deploy
  --- Logical volume ---
  LV Path                /dev/vg_deploy/lv_deploy
  LV Name                lv_deploy
  VG Name                vg_deploy
  LV UUID                2vxcDv-AHfB-7c2x-1PM8-nbn3-38M5-c1QoNS
  LV Write Access        read/write
  LV Creation host, time deploy.example.com, 2017-12-05 08:15:59 -0500
  LV Status              available
  # open                 0
  LV Size                3.91 GiB
  Current LE             1000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Let’s create our file system on the new logical volume

[root@deploy ~]$ mkfs.xfs  /dev/vg_deploy/lv_deploy
meta-data=/dev/vg_deploy/lv_deploy isize=512    agcount=4, agsize=256000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1024000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
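The mkfs output can be cross-checked the same way: the number of data blocks times the block size gives the usable size (a shell sketch, with the numbers taken from the output above):

```shell
# 1024000 blocks of 4096 bytes each, in GiB -> the ~3.9G that df will report
awk 'BEGIN{printf "%.2f GiB\n", 1024000 * 4096 / (1024 ^ 3)}'
```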

We now create a new directory, “/mysqldata”, to serve as the mount point

[root@deploy ~]$ mkdir /mysqldata

We add an entry for the new logical volume to /etc/fstab

[root@deploy ~]$ echo "/dev/mapper/vg_deploy-lv_deploy     /mysqldata      xfs     defaults      0 0" >> /etc/fstab
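Rather than echoing straight into /etc/fstab, it can be safer to build the entry in a variable and eyeball it first; a minimal sketch of that idea (the device path and mount point are the ones used above):

```shell
# build the fstab entry first, then inspect it before appending it for real
entry="/dev/mapper/vg_deploy-lv_deploy /mysqldata xfs defaults 0 0"
# a valid fstab line has 6 fields: device, mountpoint, type, options, dump, pass
echo "$entry" | awk '{print NF " fields"}'
```

Once it looks right, `echo "$entry" >> /etc/fstab` appends it as in the command above.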

We mount it

[root@deploy ~]$ mount -a

We check the filesystem is mounted properly

[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.9G   33M  3.9G   1% /mysqldata

We add some files to the /mysqldata directory (a for loop will help us)

[root@deploy mysqldata]$ for i in 1 2 3 4 5; do dd if=/dev/zero  of=/mysqldata/file0$i bs=1024 count=10; done
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000282978 s, 36.2 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000202232 s, 50.6 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000255617 s, 40.1 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000195752 s, 52.3 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000183672 s, 55.8 MB/s
[root@deploy mysqldata]$ ls -l
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05
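Since we are about to destroy and recreate the filesystem, recording checksums now makes the restore verifiable later. Here is a self-contained sketch of the idea, run against throwaway files in a temp directory (in practice you would point md5sum at /mysqldata/file0* and keep the checksum list outside /mysqldata, e.g. in /tmp):

```shell
# create a scratch copy of the five files and checksum them
tmpdir=$(mktemp -d)
for i in 1 2 3 4 5; do
  dd if=/dev/zero of="$tmpdir/file0$i" bs=1024 count=10 2>/dev/null
done
md5sum "$tmpdir"/file0* > "$tmpdir/checksums.md5"
# later, after the restore, the same list verifies the data survived intact
md5sum -c --quiet "$tmpdir/checksums.md5" && echo "all files verified"
rm -rf "$tmpdir"
```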

Now comes the interesting part: we are going to reduce our /mysqldata filesystem.
But first, let's install the xfsdump package so we can back up the current /mysqldata FS.

[root@deploy mysqldata]$ yum -y install xfsdump
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

Bad news! We cannot shrink an XFS filesystem directly, so we need:
– to back up our filesystem
– to unmount the filesystem and delete the logical volume
– to recreate a smaller logical volume with an XFS filesystem
– to restore our data

We back up the filesystem

[root@deploy mysqldata]$ xfsdump -f /tmp/mysqldata.dump /mysqldata
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.4 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> test
session label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of deploy.example.com:/mysqldata
xfsdump: dump date: Tue Dec  5 08:36:20 2017
xfsdump: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsdump: session label: "test"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 83840 bytes

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> test
media label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 75656 bytes
xfsdump: dump size (non-dir files) : 51360 bytes
xfsdump: dump complete: 5 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /tmp/mysqldata.dump OK (success)
xfsdump: Dump Status: SUCCESS
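Before destroying anything, it is worth checking the archive: `xfsrestore -t -f /tmp/mysqldata.dump` should list its contents without restoring them. A side note on the summary above: the gap between the media file size and the non-dir data size is the dump's own overhead (inode map, directory entries, headers). The arithmetic, with the numbers copied from the output:

```shell
# 75656 bytes on media minus 51360 bytes of file data = dump metadata overhead
echo "$((75656 - 51360)) bytes of overhead"
```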

Then, we unmount the filesystem and delete the logical volume

[root@deploy ~]$ umount /mysqldata/

[root@deploy ~]$ df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        21G  8.7G   13G  42% /
devtmpfs            devtmpfs  910M     0  910M   0% /dev
tmpfs               tmpfs     920M     0  920M   0% /dev/shm
tmpfs               tmpfs     920M  8.4M  912M   1% /run
tmpfs               tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  227M  788M  23% /boot
tmpfs               tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2          iso9660   4.3G  4.3G     0 100% /media/iso

[root@deploy ~]$ lvremove /dev/vg_deploy/lv_deploy
Do you really want to remove active logical volume vg_deploy/lv_deploy? [y/n]: y
  Logical volume "lv_deploy" successfully removed

We recreate the logical volume with a smaller size (800 PEs instead of 1000)

[root@deploy ~]$ lvcreate -l 800 -n lv_deploy vg_deploy
WARNING: xfs signature detected on /dev/vg_deploy/lv_deploy at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/vg_deploy/lv_deploy.
  Logical volume "lv_deploy" created.
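The new size can again be predicted from the extent count (plain arithmetic, matching the 3.2G that df will show below):

```shell
# 800 PEs * 4 MiB = 3200 MiB = 3.125 GiB (df rounds this to 3.2G)
echo "$((800 * 4)) MiB"
awk 'BEGIN{printf "%.3f GiB\n", 800 * 4 / 1024}'
```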

We build the XFS filesystem

[root@deploy ~]$ mkfs.xfs /dev/mapper/vg_deploy-lv_deploy
meta-data=/dev/mapper/vg_deploy-lv_deploy isize=512    agcount=4, agsize=204800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=819200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We remount the filesystem

[root@deploy ~]$ mount -a
[root@deploy ~]$
[root@deploy ~]$
[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.2G   33M  3.1G   2% /mysqldata

We list the contents of the /mysqldata directory: it is empty, as expected after the fresh mkfs

[root@deploy ~]$ ls -l /mysqldata
total 0

Let’s restore our data

[root@deploy ~]$ xfsrestore -f /tmp/mysqldata.dump /mysqldata
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: deploy.example.com
xfsrestore: mount point: /mysqldata
xfsrestore: volume: /dev/mapper/vg_deploy-lv_deploy
xfsrestore: session time: Tue Dec  5 08:36:20 2017
xfsrestore: level: 0
xfsrestore: session label: "test"
xfsrestore: media label: "test"
xfsrestore: file system id: 84832e04-e6b8-473a-beb4-f4d59ab9e73c
xfsrestore: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsrestore: media id: 8fda43c1-c7de-4331-b930-ebd88199d0e7
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 5 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /tmp/mysqldata.dump OK (success)
xfsrestore: Restore Status: SUCCESS

Our data are back

[root@deploy ~]$ ls -l /mysqldata/
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05
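To be thorough, you can compare this listing against what was there before the shrink. A self-contained sketch of that comparison, using stand-in files (the real check would simply compare a pre-shrink listing against `ls /mysqldata`):

```shell
# the file names we expect to find after the restore
expected="file01 file02 file03 file04 file05"
tmp=$(mktemp -d)
for f in $expected; do : > "$tmp/$f"; done   # stand-in for the restored files
# glob expansion is sorted, so this joins the names in order with spaces
restored=$(cd "$tmp" && echo *)
[ "$restored" = "$expected" ] && echo "restore matches"
rm -rf "$tmp"
```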

Hope this helps :-)


This article, How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7?, first appeared on the dbi services blog.

