Feed aggregator

Docker-Swarm: One manager, two nodes with Alpine Linux

Dietrich Schroff - Tue, 2017-12-05 16:32
After creating an Alpine Linux VM inside VirtualBox and adding Docker (chosen for the small disk footprint: Alpine Linux 170 MB, with Docker 280 MB), I performed the following steps to create a Docker swarm:
  • cloning the VM twice
  • assigning a static IP to the manager node
  • creating new MACs for the network interface cards on the worker nodes


Then I followed the tutorial https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/, but without the docker-machine commands, because I already have three VMs and do not want to run the nodes on top of Docker.

manager:
alpine:~# docker swarm init --advertise-addr 192.168.178.46
Swarm initialized: current node (wy1z8jxmr1cyupdqgkm6lxhe2) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-0yfr1eu5u66z8pinweisltmci 192.168.178.46:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

nodes:
# docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-0yfr1eu5u66z8pinweisltmci 192.168.178.46:2377
This node joined a swarm as a worker.
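If the join command is needed again later, the manager can re-print it at any time with the standard CLI:

alpine:~# docker swarm join-token worker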
And then a check on the manager:
alpine:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wy1z8jxmr1cyupdqgkm6lxhe2 *   alpine              Ready               Active              Leader
pusf5o5buetjqrsmx3kzusbyt     node01              Ready               Active             
io3z3b6nf8xbzkyzjq6sa7cuc     node02              Ready               Active             
Run a first service:
alpine:~# docker service create --replicas 1 --name helloworld alpine ping 192.168.178.1
rsn6igby4f6d7uuy8eny7sbfb
overall progress: 1 out of 1 tasks
1/1: running  
verify: Service converged
On the manager, "docker ps" shows no output. That is because the service is not running there:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
wrrobalt4oe7        helloworld.1        alpine:latest       node01              Running             Running 2 minutes ago                      
node01 shows:
node01:~# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
40c5e9b2ffbc        alpine:latest       "ping 192.168.178.1"   3 minutes ago       Up 3 minutes                            helloworld.1.wrrobalt4oe7mrbhxjlweuxgk
If the ping process is killed, it is immediately restarted:
node01:~# ps aux|grep ping
 2457 root       0:00 ping 192.168.178.1
 2597 root       0:00 grep ping
node01:~# kill 2597
node01:~# ps aux|grep ping
 2457 root       0:00 ping 192.168.178.1
 2600 root       0:00 grep ping
Scaling up is no problem either:
alpine:~# docker service create --replicas 2 --name helloworld alpine ping 192.168.178.1
3lrdqdpjuqml6creswdcqpn2p
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
616scw68s8bv        helloworld.1        alpine:latest       node02              Running             Running 8 seconds ago                      
n8ovvsw0m4id        helloworld.2        alpine:latest       node01              Running             Running 8 seconds ago                      
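Note that the transcript above recreates the service with two replicas (the service ID changed). For a running service, the replica count can also be changed in place with the standard CLI:

alpine:~# docker service scale helloworld=2

which is shorthand for docker service update --replicas 2 helloworld.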
And a shutdown of node02 is no problem:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Ready               Ready 2 seconds ago                             
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running 17 seconds ago                          
n8ovvsw0m4id        helloworld.2        alpine:latest       node01              Running             Running about a minute ago          


After switching off node01 as well, both services are running on the remaining manager:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Running             Running about a minute ago                      
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running about a minute ago                      
pd8dfp4133yw        helloworld.2        alpine:latest       alpine              Running             Running 2 seconds ago                           
n8ovvsw0m4id         \_ helloworld.2    alpine:latest       node01              Shutdown            Running 2 minutes ago              
So failover is working.
But failback does not occur: after switching node01 back on, the services remain on the manager:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE               ERROR                         PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Running             Running 4 minutes ago                                    
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running 4 minutes ago                                    
pd8dfp4133yw        helloworld.2        alpine:latest       alpine              Running             Running 2 minutes ago                                    
n8ovvsw0m4id         \_ helloworld.2    alpine:latest       node01              Shutdown            Failed about a minute ago   "task: non-zero exit (255)"  
alpine:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wy1z8jxmr1cyupdqgkm6lxhe2 *   alpine              Ready               Active              Leader
pusf5o5buetjqrsmx3kzusbyt     node01              Ready               Active             
io3z3b6nf8xbzkyzjq6sa7cuc     node02              Down                Active             


Last thing: How to stop the service?
alpine:~# docker service rm  helloworld
helloworld
alpine:~# docker service ps helloworld
no such service: helloworld
Remaining open points:
  • Is it possible to force a failback, or to limit the number of tasks of a service per node? (see the sketch below)
  • How to do this with a server application?
    (is a load balancer needed?)
  • What happens if the manager fails or is shut down?
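Two partial answers to the first open point, sketched with standard Swarm CLI features (the service parameters are the ones from above; treat the exact invocation as an assumption for this Docker version):

# keep tasks off the manager with a placement constraint
alpine:~# docker service create --replicas 2 --constraint 'node.role==worker' --name helloworld alpine ping 192.168.178.1

# after node01 rejoins, force a rolling redeployment, which rebalances the tasks
alpine:~# docker service update --force helloworld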

What are the benefits of Manufacturing Dashboards?

Nilesh Jethwa - Tue, 2017-12-05 16:15

Today in the US economy, the major players in the manufacturing industry are electronics, automobile, steel, consumer goods, and telecommunications, and they offer increasingly advanced products, including tablets and smartphones. These technological advancements significantly influence consumer lifestyles.

Along with these changes, the global manufacturing industry is currently embracing a new key player called metrics-based manufacturing. This is the latest trend that industries need to consider in their sales funnel. So, what does this mean?

Read more at http://www.infocaptor.com/dashboard/manufacturing-dashboards-what-are-their-benefits

JET Composite Component in ADF Faces UI - Deep Integration

Andrejus Baranovski - Tue, 2017-12-05 12:21
The Oracle JET team doesn't recommend or support integrating JET into ADF Faces. This post is based on my own research and doesn't reflect best practices recommended by Oracle. If you want to try the same, do it at your own risk.

All this said, I still think finding ways of deeper JET integration into ADF Faces is important. A next step would be to implement an editable, JET-based grid component and integrate it into ADF to improve the fast-data-entry experience.

Today's post focuses on integrating a read-only JET Composite component into ADF Faces. I recommend reading my previous posts on this topic; today I'm using methods described in these posts:

1. JET Composite Component - JET 4.1.0 Composite - List Item Action and Deferred Loading

2. JET and ADF integration - Improved JET Rendering in ADF

You can access source code for ADF and JET Composite Component application in my GitHub repository - jetadfcomposite.

Let's start from the UI. I have implemented an ADF application with regions. One of the regions contains the JET Composite. An ADF Query sends its result into the JET Composite. There is also integration in the other direction: when a link is clicked in the JET Composite, the ADF form is refreshed and displays the row corresponding to the selected item. The list on the left is rendered from a series of JET components; each component implements one list item:


As you can see, there are two types of calls:

1. The ADF Query sends its result to the JET Composite: an ADF -> JET call.
2. The JET Composite forces the ADF form to display the row data for the selected item: a JET -> ADF call.

Very important to mention: the JET Composite gets its data directly from ADF Bindings; there is no REST layer here. This simplifies the JET implementation in ADF Faces significantly.

What is the advantage of using a JET Composite in ADF Faces? Answer - improved client-side performance. For example, this component allows an item to be expanded. Such an action in a pure ADF Faces component would produce a request to the server, while in JET it happens on the client, since the processing is done in JS:


No call to the server is made when an item is expanded:


Out of the box, the JET Composite works well with ADF Faces geometry. In this example, the JET Composite is located inside an ADF Panel Splitter. When the Panel Splitter is resized, the JET Composite UI is nicely resized too, since it is responsive out of the box. Another advantage of using a JET Composite in an ADF Faces UI:


When the "Open" link is clicked in the JET Composite, a JS call is made and, through an ADF server listener, we update the current row in ADF to the corresponding data. This shows how we can send events from a JET Composite to ADF Faces:


Navigation to another region works:


And on coming back, the JET content is displayed fine even after ADF Faces PPR was executed (a simple trick is required for this to work, see below). If we explore the page source, we will see that each JET Composite element is stamped in HTML within the ADF Faces HTML structure:


The great thing is that a JET Composite which runs in plain JET doesn't require any changes to run in ADF Faces. In my example, I only added a hidden ID field value to the JET Composite, to be able to pass it to ADF to set the current row later:


A couple of hints regarding infrastructure: it is not convenient to copy the JET Composite code directly into the ADF application. It is more convenient to wrap the JET code into a JAR and attach it to ADF that way. To achieve this, I recommend creating an empty Web project in JDeveloper, copying the JET Composite code there (into the public_html folder) and building a JAR out of it:


Put all JET content into the JAR:


If the Web project is located within the main ADF app, make sure to use Working Sets and filter it out, so that it is not included in the EAR during the build process:


Now you can add the JAR with the JET content to the ADF app:


In order for the JET HTML/JS resources to be accessible from the JAR file, make sure to add the required configuration to the main ADF application's web.xml file. Add the ADF resources servlet, if it is not there already:


Add the servlet mapping; this allows content to be loaded from the ADF JAR library:


To load resources such as JSON, CSS, etc. from the ADF JAR, add the ADF library filter and list all extensions to be loaded from the JAR:


Add FORWARD and REQUEST dispatcher filter mappings for the ADF library filter from above:


As I mentioned above, the JET Composite is rendered directly with ADF Bindings data, without calling any REST service. This simplifies the JET Composite implementation in ADF Faces. It is simply rendered through an ADF Faces iterator. The JET Composite properties are assigned ADF Faces EL expressions to get data from ADF Bindings:


JET is not compatible with the ADF PPR request/response cycle. If JET content is included in an ADF PPR response, the content gets corrupted and is not displayed anymore. To overcome this, we re-draw the JET content whenever it was included in a PPR response. This doesn't reload the JET modules, it simply re-draws the UI. In my example, the ADF Query sends a PPR request to the area where the JET Composite renders its result, so I have overridden the query listener:


The other places where PPR is generated for the JET Composite are the tab switch and the More link, which loads more results. All these actions are overridden to call methods in the bean:


The method reDrawJet is invoked, which calls a simple utility method to invoke the JS function that actually re-draws the JET UI:


The JET UI re-draw happens in a JS function which cleans the Knockout.js nodes and re-applies the current JET model bindings:
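Since the screenshot of that function is not reproduced in this feed, here is a minimal sketch of such a re-draw helper, assuming Knockout's standard API; the container ID and the view model reference are hypothetical, not the author's exact code:

function reDrawJetUI() {
    // clean the Knockout bindings on the composite's container node,
    // then re-apply the current JET module's model
    var node = document.getElementById('jetCompositeContainer'); // hypothetical ID
    ko.cleanNode(node);
    ko.applyBindings(viewModel, node); // viewModel: the current JET module's model
}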


The JET -> ADF call is made through a JET Composite event. This event is assigned a JS function implemented in the ADF Faces context. This allows JS located in ADF Faces to be called without changing the JET Composite code. I'm using a regular ADF server listener to initiate the JS -> server-side call:
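A minimal sketch of the wiring on the ADF side, assuming ADF's standard client-side JS API (AdfPage / AdfCustomEvent); the component ID and event name are hypothetical:

function handleJetOpenLink(rowKey) {
    // queue a custom event on a hidden ADF button so its serverListener fires
    var button = AdfPage.PAGE.findComponentByAbsoluteId('hiddenButton'); // hypothetical ID
    AdfCustomEvent.queue(button, 'jetSelectionEvent', {key: rowKey}, true);
}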


The ADF server listener is attached to a generic button in ADF Faces:


The ADF server listener does its job and applies the received key to set the current row in ADF, which automatically triggers the ADF form to display the correct data:

NetSuite Adds Three New Partners Seeking to Drive Growth with Cloud ERP

Oracle Press Releases - Tue, 2017-12-05 08:00
Press Release
NetSuite Adds Three New Partners Seeking to Drive Growth with Cloud ERP Apps Associates, BTM Global and iSP3 Expand Existing Oracle Relationships with NetSuite Practices

SAN MATEO, Calif.—Dec 5, 2017

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced the addition of new partners to NetSuite’s Solution Provider Program and Alliance Partner Program. Apps Associates, BTM Global and iSP3 are expanding their longtime Oracle partner relationships to add NetSuite practices to their portfolios. Partnering with NetSuite allows them to help organizations in a range of industries improve operational efficiency, business agility, customer focus and data-driven decision-making. As NetSuite partners, the three technology consulting and implementation firms are equipped to meet growing demand for cloud ERP as the pace of business accelerates and organizations look to graduate from standalone applications that can limit the ability to innovate and scale. The new partners benefit from added capacity to grow their areas of expertise and client base, as well as flexibility to offer proprietary add-on solutions and increase both margin and recurring revenue.

“Our three new partners add a diversity of industry focus and expertise with a common commitment to helping clients thrive in the cloud,” said Craig West, Oracle NetSuite Vice President of Alliances and Channels. “We’re delighted to collaborate with them in delivering agile cloud solutions that help our mutual customers scale, innovate and grow.”

Apps Associates, a Platinum Level Member of Oracle PartnerNetwork, Extends Offerings as NetSuite Solution Provider

Apps Associates (www.appsassociates.com), an 800-person global technology and business services firm based in Acton, Mass., is a Platinum level member of the Oracle PartnerNetwork. Apps Associates provides services across the Oracle product line for ERP, HCM, Analytics, Cloud, Integration, Database and other technologies. Founded in 2002, Apps Associates has grown its business across the US, Europe and India with a client base in life sciences, medical devices, manufacturing, distribution, financial services, retail and healthcare. Apps Associates also partners with AWS, Salesforce, Dell Boomi, TIBCO and MuleSoft. In teaming up with NetSuite, Apps Associates furthers its mission of working with customers to enable business improvement by streamlining business processes using advanced technology. The firm will handle NetSuite financials / ERP, HR, PSA, CRM and ecommerce in response to growing customer demand for flexible cloud business management software.

“When NetSuite became part of the Oracle family of products, it was a natural opportunity to extend our offerings and bring a holistic solution to our customers,” said Scott Shelko, NetSuite Practice Manager at Apps Associates. “Apps Associates brings scale and discipline with our NetSuite practice, so our customers have the value of a full lifecycle partner that really understands the platform.”

BTM Global Expands its Oracle Retail Solutions Portfolio as NetSuite Alliance Partner

BTM Global (www.btmglobal.com), headquartered in Minneapolis and Ho Chi Minh City, Vietnam, helps retailers compete in a fast-changing world with systems integration and development services. Its clients range from regional chains to the world’s most recognized brands, including Red Wing Shoes, True Value, World Kitchen and Perry Ellis International. Founded by veterans of RETEK, a software company acquired by Oracle, BTM Global has deep expertise in Oracle Retail solutions that address retailers’ needs around store-based solutions, merchandising, business intelligence and point-of-service systems, notably EMV. 

BTM Global notes that more retailers, especially in the small and midmarket space, are interested in agile, cloud-based solutions that can be rapidly deployed and extended into new markets. BTM Global’s new partnership with NetSuite expands its commitment to providing its clients with a diversity of expertise, resulting in more creative and bolder solutions for retailers’ technology challenges. BTM Global will provide its core services – development, implementation, support and strategic technology planning – to retailers leveraging SuiteCommerce, NetSuite’s ecommerce platform, along with CRM and back-office functionality. BTM Global is also a Gold level member of Oracle PartnerNetwork.

“We’re very happy to be able to offer NetSuite customers our unique, well-rounded expertise for their integration and development needs,” said Tom Schoen, President at BTM Global. “NetSuite offers a proven omnichannel commerce solution on a unified cloud platform with access to real-time data to improve business agility.”

iSP3 Builds on JD Edwards Relationship as a NetSuite Solution Provider

iSP3 (www.isp3.ca), a technology consulting firm that focuses on implementations of Oracle JD Edwards software, is a Platinum level member of Oracle PartnerNetwork that serves companies in the mining, building and construction, oil and gas, and government industries, as well as general business. Founded as a three-person firm in 2001 based in Vancouver, Canada, iSP3 also has offices in Toronto and Seattle. The firm has consultants across North America and Latin America, and has completed more than 100 projects in the Americas as well as Eastern Europe and Asia. In recent years, the 23-person iSP3 has seen rising demand among both prospects and clients for cloud-based ERP. Now, as a NetSuite partner, iSP3 is able to meet that demand with NetSuite’s leading cloud ERP platform. iSP3’s NetSuite practice can address demand for cloud ERP, while providing agility and scale for clients to grow. Beyond ERP, NetSuite’s integrated capabilities for ecommerce, CRM, HR, PSA and other functions also provide iSP3 the opportunity to expand its business into new industries and business processes.

“NetSuite’s cloud-based platform is a very good fit for iSP3 and our clients,” said William Liu, Senior Partner at iSP3. “The ability to deploy NetSuite’s unified system in a short period of time to gain ROI very quickly is extremely attractive.”

Launched in 2002, the NetSuite Solution Provider Program is the industry’s leading cloud channel partner program. Since its inception, NetSuite has been a leader in partner success, breaking new ground in building and executing on the leading model to make the channel successful with NetSuite. A top choice for partners who are building new cloud ERP practices or for those expanding their existing practice to meet the demand for cloud ERP, NetSuite has enabled partners to transform their business model to fully capitalize on the revenue growth opportunity of the cloud. The NetSuite Solution Provider Program delivers unprecedented benefits that include highly attractive margins and range from business planning, sales, marketing and professional services enablement, to training and education. For more information about the NetSuite Solution Provider Program, visit www.netsuite.com/partners.

Contact Info
Christine Allen
Oracle NetSuite
603-743-4534
PR@netsuite.com
About NetSuite Alliance Program

The NetSuite Alliance Partner program provides business transformation consulting services as well as integration and implementation services that help customers get even more value from their NetSuite software. Alliance Partners are experts in their field and have a deep and unique understanding of NetSuite solutions. NetSuite provides Alliance Partners with a robust set of resources, certified training, and tools, enabling them to develop expertise around specific business functions, product areas, and industries so they can efficiently assist customers, differentiate their practices, and grow their business. For more information, please visit http://www.netsuite.com/portal/partners/alliance-partner-program.shtml.

About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • 603-743-4534

No journal messages available before the last reboot of your CentOS/RHEL system?

Yann Neuhaus - Tue, 2017-12-05 02:20

As you probably noticed, Red Hat as well as CentOS switched to systemd with version 7 of their operating system releases. This also means that instead of looking at /var/log/messages you are supposed to use journalctl to browse the messages of the operating system. One issue with that: messages from before the last reboot of your system will not be available, which is probably not what you want.

Let's say I started my Red Hat Linux system just now:

Last login: Tue Dec  5 09:12:34 2017 from 192.168.22.1
[root@rhel7 ~]$ uptime
 09:14:14 up 1 min,  1 user,  load average: 0.33, 0.15, 0.05
[root@rhel7 ~]$ date
Die Dez  5 09:14:15 CET 2017

Asking for any journal logs from before that boot will not show anything:

[root@rhel7 ~]$ journalctl --help  | grep "\-\-since"
  -S --since=DATE          Show entries not older than the specified date
[root@rhel7 ~]$ journalctl --since "2017-12-04 00:00:00"
-- Logs begin at Die 2017-12-05 09:13:07 CET, end at Die 2017-12-05 09:14:38 CET. --
Dez 05 09:13:07 rhel7.localdomain systemd-journal[86]: Runtime journal is using 6.2M (max allowed 49.6M, trying to 
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuset
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpu
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuacct

Nothing for yesterday, which is bad. The issue here is the default configuration:

[root@rhel7 ~]$ cat /etc/systemd/journald.conf 
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

“Storage=auto” means that the journal will only be persistent if this directory exists (it does not in the default setup):

[root@rhel7 ~]$ ls /var/log/journal
ls: cannot access /var/log/journal: No such file or directory

As soon as this is created and the service is restarted the journal will be persistent and will survive a reboot:

[root@rhel7 ~]$ mkdir /var/log/journal
[root@rhel7 ~]$ systemctl restart systemd-journald.service
[root@rhel7 ~]$ ls -al /var/log/journal
total 4
drwxr-xr-x.  3 root root   46  5. Dez 09:15 .
drwxr-xr-x. 10 root root 4096  5. Dez 09:15 ..
drwxr-xr-x.  2 root root   28  5. Dez 09:15 a473db3bada14e478442d99da55345e0
[root@rhel7 ~]$ ls -al /var/log/journal/a473db3bada14e478442d99da55345e0/
total 8192
drwxr-xr-x. 2 root root      28  5. Dez 09:15 .
drwxr-xr-x. 3 root root      46  5. Dez 09:15 ..
-rw-r-----. 1 root root 8388608  5. Dez 09:15 system.journal

Of course you should also have a look at the other parameters that control the size and rotation of the journal.
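For example, to make persistent storage explicit and cap disk usage, the commented defaults shown above can be overridden in /etc/systemd/journald.conf (the values here are just an illustration):

[Journal]
Storage=persistent
SystemMaxUse=500M
MaxRetentionSec=1month

followed by a restart of systemd-journald.service.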

 


https://medium.com/oracledevs

Senthil Rajendran - Mon, 2017-12-04 23:02

Oracle Developers

Aggregation of articles from Oracle engineers, Developer Champions, partners, and the developer community on all things Oracle Cloud and its technologies.


Oracle Cloud Conference @Bangalore

Senthil Rajendran - Mon, 2017-12-04 21:21

Attending Oracle Cloud Conference @Bangalore 2017

Dynamic SQL to get the value of a column whose name is formed by concatenating two strings

Tom Kyte - Mon, 2017-12-04 18:06
Hi Team, I have a query like this: I will get the column name at run time, something like: IF condition 1 then Column A. IF condition 2 then Column B. IF condition 3 then Column C. IF condition 4 then Column D. Once i get to know whi...
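The question is cut off in the feed, but the usual pattern for reading a column whose name is only known at run time is native dynamic SQL; a minimal sketch with hypothetical table and column names:

DECLARE
  l_cond  NUMBER := 1;        -- stands in for the runtime condition
  l_col   VARCHAR2(30);
  l_value VARCHAR2(4000);
BEGIN
  l_col := CASE l_cond WHEN 1 THEN 'COLUMN_A' ELSE 'COLUMN_B' END;
  -- DBMS_ASSERT guards the concatenated identifier against SQL injection
  EXECUTE IMMEDIATE 'SELECT ' || SYS.DBMS_ASSERT.simple_sql_name(l_col) ||
                    ' FROM my_table WHERE id = :1'
    INTO l_value USING 42;
END;
/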
Categories: DBA Blogs

How To Benefit From SEO Audit

Nilesh Jethwa - Mon, 2017-12-04 16:37

Businesses need to capitalize on the growing online market if they want to succeed in modern commerce. The thing about Internet marketing is that there are a number of things that have to be addressed to ensure that sites are performing well and actually exist as assets for companies that use them.

One of the things that businesses online need to ensure is that they run an SEO audit every now and then. The audit gives them insights into how their websites are performing, from their current search engine standing to their effectiveness as an online marketing tool.

It’s important that business sites provide information and remain relevant. With the SEO audit, companies can determine which particular components need improvement and which ones are functioning correctly. Everything from content quality to backlinking to indexing is assessed through this process and this is why it’s something that can’t be discounted from the equation.

Unbeknownst to most people, an SEO audit doesn’t only look into the performance of on-page activities. It also assesses any off-page activities that a company might currently be or have engaged in. When it comes to the latter, a good example would be the assessment of the performance, reliability, and value of third-party inbound links.

Read more at https://www.infocaptor.com/dashboard/the-benefits-of-an-seo-audit

Improving Google Crawl Rate Optimization

Nilesh Jethwa - Mon, 2017-12-04 16:12

There are different components that form an SEO strategy, one of which is commonly referred to as crawling, with tools often being called spiders. When a website is published on the Internet, it is indexed by search engines like Google to determine its relevance. The site is then ranked on the search engine with a higher ranking being attributed to a higher visibility potential per primary keyword.

In its indexing process, a search engine must be able to crawl through the website in full, page by page, so that it can determine the site's digital value. This is why it's important for all elements of the page to be crawlable, or else there might be pages that the search engine won't be able to index. As a result, these won't be displayed as relevant results when someone searches with a relevant keyword.

Search engines like Google work fast. A website can be crawled and indexed within minutes of publishing. So one of your main goals is to see to it that your site can be crawled by these indexing bots or spiders. In addition, the easier your site is to crawl, the more points the search engine will add to your overall score for ranking.

There are different methods that you can try to optimize your crawl rate and here are some of them:

Read more at https://www.infocaptor.com/dashboard/how-to-improve-google-crawl-rate-optimization

You Don't Want to Miss This Event

PeopleSoft Technology Blog - Mon, 2017-12-04 10:13

If you are a PeopleSoft customer you probably haven't seen this:

https://blogs.oracle.com/ebsandoraclecloud/replatforming-your-oracle-applications-to-oracle-cloud-infrastructure:-forthcoming-regional-events

After all, not many of us on the PeopleTools side monitor the EBS Blog!  But in this case, it's worth a look.  What will it tell you?  PeopleTools is going on the road to talk about running your PeopleSoft applications in the cloud. 

What does it mean to run your PeopleSoft applications in the cloud?  It means you take your own version of PeopleSoft and run it on Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) technology provided from the Oracle Cloud Infrastructure.  While that might sound like a lot of work, it's actually made easy by using a product PeopleTools delivers called PeopleSoft Cloud Manager.  You can learn more about that product on the PeopleSoft Information Portal at www.peoplesoftinfo.com.

Who should attend this event?  Anyone that is interested in:

  • Learning more about the Oracle Infrastructure and Platform clouds
  • Discovering how to reduce the cost of running PeopleSoft applications
  • Learning more about PeopleSoft Cloud Manager
  • Sitting in on a Q&A session with PeopleTools product strategy (yes, we'll even answer questions about Classic Plus!)
  • A free lunch

Go to the link above for more information on the cities, event details and links for registration.  The events we've done so far have been a great success.  Hope to see you there -

 

Oracle Financial Services Unveils FLEXCUBE V14

Oracle Press Releases - Mon, 2017-12-04 08:00
Press Release
Oracle Financial Services Unveils FLEXCUBE V14 Includes 1,200+ new enhancements designed for a connected banking experience and new blockchain, machine learning adapters

Redwood Shores, Calif.—Dec 4, 2017

Oracle today announced general availability of the newest release of its flagship core banking application, Oracle FLEXCUBE V14.
 
Oracle FLEXCUBE V14 marks a significant milestone in Oracle’s componentization strategy. Banks now have the choice of either deploying a pre-configured offering for a comprehensive solution or embarking on a progressive transformation journey, one line of business at a time. Oracle now offers banks more choices than ever before to seamlessly integrate best-in-class functionality to their pre-existing architecture with specialized components for originations, collections, pricing, liquidity management, lending and payments.
 
As banks look to further enhance customer relationships by bringing seamless integration between business and financial lifecycles, FLEXCUBE V14 provides the advantage of more than 1,000 APIs to jump-start initiatives. Banks using FLEXCUBE V14 have a head start in exploring innovative collaborative options to integrate with corporates, third party service providers, vendors, other banks and networks.
 
"In today's connected world, banks need to seamlessly embed banking services across the lifecycle of a business as well as in the daily activities of the consumer. Banks need to transform their core banking applications to be able to bring in the intrinsic nimbleness of a modern application necessary to respond to this new paradigm,” said Oracle Financial Services Senior Vice President Chet Kamat. “Oracle FLEXCUBE V14 is mission critical for any bank embarking on the path of digital transformation"
 
This release of Oracle FLEXCUBE also features new machine learning and blockchain adapters. The new machine learning adapter unlocks intelligence ingrained in the enterprise to drive process optimization, better decisioning and deliver operational and cost benefits.  Separately, seamless connectivity between Oracle FLEXCUBE and other banks is made possible through FLEXCUBE V14’s blockchain adapter, which enables more fluid straight-through processing and high fidelity information exchange.
 
For the past 40 years, Oracle has connected people and businesses to information with the expressed intent of re-imagining what is possible. FLEXCUBE V14 will continue Oracle’s journey toward providing financial institutions across the globe an opportunity to expand their digital capabilities, rethink ways of doing business and modernize their technology in a considered, efficient manner.
Contact Info
Alex Moriconi
Oracle Corporation
+1-650-607-6598
Alex.Moriconi@Oracle.com
About Oracle
The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.
 
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
 
Safe Harbor
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.
Talk to a Press Contact

Alex Moriconi

  • +1-650-607-6598

Announcing The New Open Source WebLogic Monitoring Exporter on GitHub

OTN TechBlog - Mon, 2017-12-04 08:00

As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter. This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana.

We are also making the WebLogic Monitoring Exporter tool available as open source on GitHub, which will allow our community to contribute to this project and be part of enhancing it. 

The WebLogic Monitoring Exporter is implemented as a web application that is deployed to the WebLogic Server instances that are to be monitored. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics.  With a single HTTP query, and no special setup, it provides an easy way to select the metrics that are monitored for a managed server.

For detailed information about the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server.

Prometheus collects the metrics that have been scraped by the WebLogic Monitoring Exporter. By constructing Prometheus-defined queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain.

We can use Grafana to display these metrics in graphical form.  Connect Grafana to Prometheus, and create queries that take the metrics scraped by the WebLogic Monitoring Exporter and display them in dashboards.

For more information, see Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes.

Get Started!

Get started building and deploying the WebLogic Monitoring Exporter, set up Prometheus and Grafana, and monitor the metrics from the WebLogic Managed Servers in a domain/cluster running in Kubernetes. 

  • Clone the source code for the WebLogic Monitoring Exporter from GitHub.
  • Build the WebLogic Monitoring Exporter following the steps in the README file.
  • Install both Prometheus and Grafana in the host where you are running Kubernetes.  
  • Start a WebLogic on Kubernetes domain; find a sample in GitHub.
  • Deploy the WebLogic Monitoring Exporter to the cluster where the WebLogic Managed servers are running.
  • Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes that steps you through collecting metrics in Prometheus and display them in Grafana dashboards.
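Once the exporter is deployed, a quick smoke test is to fetch its metrics page directly; a sketch assuming the exporter's default wls-exporter context root, with host and port as placeholders:

$ curl http://managed-server-host:8001/wls-exporter/metrics

A Prometheus scrape job then simply points at the same path on each managed server.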

We welcome you to try this out. It's a good start to making the transition to open source monitoring tools.  We can work together to enhance it and take full advantage of its functionality in Docker/Kubernetes environments.

 

Partner webcast - Create new opportunities with Oracle Analytics and Big Data

      Create new opportunities with Oracle Analytics and Big...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Rittman Mead at UKOUG 2017

Rittman Mead Consulting - Mon, 2017-12-04 02:58

For those of you attending the UKOUG this year, we are giving three presentations on OBIEE and Data Visualisation.

Francesco Tisiot has two on Monday:

  • 14:25 // Enabling Self-Service Analytics With Analytic Views & Data Visualization From Cloud to Desktop - Hall 7a
  • 17:55 // OBIEE: Going Down the Rabbit Hole - Hall 7a

Federico Venturin is giving his culinary advice on Wednesday:

  • 11:25 // Visualising Data Like a Top Chef - Hall 6a

And Mike Vickers is diving into BI Publisher, also on Wednesday

  • 15:15 // BI Publisher: Teaching Old Dogs Some New Tricks - Hall 6a

In addition, Sam Jeremiah and I are also around, so if anyone wants to catch up, grab us for a coffee or a beer.

Categories: BI & Warehousing

I dropped a table in Oracle, and the indexes came back with 'BIN$...' names; I rebuilt them and the state is still VALID

Tom Kyte - Sun, 2017-12-03 23:46
I dropped the table with the cascade option; after importing the table, the indexes are there with BIN$... names and the state is VALID. Are they really valid? I tried to rebuild them; the rebuild works, but the names are not changing.
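The answer is truncated in the feed; the usual cleanup for recycle-bin names that an import has left behind on otherwise valid indexes is a rename, sketched here with placeholder names:

ALTER INDEX "BIN$abcdefghijk==$0" RENAME TO emp_name_idx;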
Categories: DBA Blogs

Keyword Rank Tracking Tools

Nilesh Jethwa - Sun, 2017-12-03 13:34

An important element of search engine optimization (SEO) is choosing the right keyword. With the right keywords, you can make your content rank on search engines. But the work doesn't stop after ranking; you still need to track the position of your keywords in search results. That way, you can obtain helpful information that will guide you in keeping your SEO efforts successful.

Why Check Keyword Ranking Regularly

One of the main reasons why you need to check your keyword ranking is to identify target keywords. Any SEO professional or blogger should understand how important it is for their content marketing strategies. In fact, a common mistake committed by website administrators and bloggers is writing and publishing articles that don’t target any keywords. It’s like aiming your arrow at something that you are not sure of.

Here are some of the best tools you can take advantage of when tracking your keyword rank:

SEMRUSH. This keyword rank tracking tool takes 10 to 15 minutes to determine which keywords or key phrases to use. Whether you are a webmaster or an SEO specialist, it will help you analyze data for your clients and website. It also offers useful features such as in-depth reports, keyword grouping, and competitor tracking.

Google Rank Checker. This is a premium online tool that you can use for free. It helps you track keyword positioning while making sure that you appear in search results. To use Google Rank Checker, all you need to do is enter the keywords that you want to check as well as your domain name. After putting in the details, you can then view the keyword ranks.

 

Read more at https://www.infocaptor.com/dashboard/best-tools-for-keyword-rank-tracking

Database direct upgrade from 11g to 12c to Exadata X6-2 from X5-2 - RMAN DUPLICATE DATABASE WITH NOOPEN

Syed Jaffar - Sun, 2017-12-03 08:45
At one of our customers, the migration of a few production databases from X5-2 to X6-2, combined with an upgrade from Oracle 11g to Oracle 12cR1, was recently delivered successfully.

As we all know, a handful of methods is available to migrate and upgrade an Oracle database. Our objective was simple: migrate an Oracle 11g (11.2.0.4) database from X5-2 to Oracle 12c (12.1.0.2) on X6-2.

Since the OS on source and target remains the same, no conversion is required.
Downtime was not an issue either, so we didn't have to weigh the various minimal-downtime options.

Considering the above, we decided to go for a direct RMAN duplicate with the NOOPEN option to upgrade the database. You can also use the same procedure with an RMAN restore/recovery, but the RMAN duplicate with NOOPEN keeps the procedure simpler.

Below is the high-level procedure to upgrade an Oracle 11g database to 12cR1 using the RMAN DUPLICATE command:


  • Copy the preupgrd.sql & utluppkg.sql scripts from the Oracle 12c ?/rdbms/admin home to /tmp (or any other location) on the 11g host;
  • Run the preupgrd.sql script on the source database (Oracle 11gR2 in our case);
  • Review preupgrade.log and apply any recommendations; you can also execute the preupgrade_fixups script to fix the issues raised in preupgrade.log;
  • Execute RMAN backup (database and archivelog) on source database;
  • scp the backup sets to remote host
  • Create a simple init file on the target with just db_name parameter;
  • Create a password file on the target;
  • Startup nomount the database on the target;
  • Create TNS entries for auxiliary (target) and primary database (source) on the target host;
  • Run DUPLICATE DATABASE with NOOPEN and all adjusted/required parameters (see the sketch below);
  • After the restore and recovery, open the database with resetlogs upgrade option;
  • Exit from the SQL prompt and run the catupgrade (the new 12c upgrade driver) with the parallel option; on the target host:

nohup $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql &
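For reference, a sketch of the duplicate step itself, assuming the backup sets were copied to /backup on the target host and the auxiliary instance is started in NOMOUNT; the database name and paths are placeholders:

$ rman AUXILIARY /
RMAN> DUPLICATE DATABASE TO mydb NOOPEN BACKUP LOCATION '/backup';

After it completes, the database is left mounted, ready for ALTER DATABASE OPEN RESETLOGS UPGRADE.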
  • After completing the catupgrade, get the postupgrade_fixups script generated by the preupgrade run and execute it on the target database;
  • Perform the timezone upgrade;
  • Gather dictionary, fixed-objects, database and system stats accordingly;
  • Run utlrp.sql to ensure all invalid objects are compiled;
  • Review dba_registry to ensure no INVALID components remain.

The above are the high-level steps. If you are looking for a step-by-step procedure, review the URL below from oraclepublisher.blogspot.com; credit to its author for a very comprehensive procedure and demonstration.


References:

http://oraclepublisher.blogspot.com/2014/06/upgrade-database-to-12c-using-rman.html


DOAG 2017: Automation in progress

Yann Neuhaus - Sun, 2017-12-03 05:28

DOAG2017_dbi

A week ago, I had the chance to be a speaker at the DOAG Konferenz 2017 in Nürnberg. It's sometimes hard to find time to attend conferences, because the end of the year is quite busy at customers, but it's also important, because it's time for sharing: I can share what I'm working on around automation and patching, and I can also see how other people are doing things.

And it was great for me this year: I started to work with Ansible to automate some repetitive tasks, and I saw a lot of interesting presentations, either about Ansible itself or where Ansible was used in the demo.

The session “Getting Started with Ansible and Oracle” by Ron Ekins from Pure Storage showed a very interesting use case demonstrating the strength of Ansible: a live demo where he cloned one production database to six different demo environments for the developers. Done this way, with a playbook, we are sure that the six environments are built without human errors, because Ansible plays the same tasks across all nodes.

The previous day, I attended the session “Practical database administration automation with Ansible” by Mikael Sandström and Ilmar Kerm from Kindred Group. They presented some modules they wrote to interact with the database using Ansible. The modules can be used to validate some parameters, create users, etc. I had found the code while working on my project but had not dived into the details. The code is available on GitHub and I will definitely have a closer look.

One might think that Ansible is not designed to manage databases, but using modules you can extend Ansible to do a lot of things.

Next week, I have the chance to also be at Tech17, organised by the UKOUG. Let's hope I can continue to learn and share!
Speaker_UKOUG2017

 


Goldengate 12.3 Automatic CDR

Michael Dinh - Sat, 2017-12-02 17:51

Automatic Conflict Detection and Resolution

Requirements: GoldenGate 12c (12.3.0.1) and Oracle Database 12c Release 2 (12.2) and later.

Automatic conflict detection and resolution does not require application changes for the following reasons:

  • Oracle Database automatically creates and maintains invisible timestamp columns.
  • Inserts, updates, and deletes use the delete tombstone log table to determine if a row was deleted.
  • LOB column conflicts can be detected.
  • Oracle Database automatically configures supplemental logging on required columns.

I have not had the chance to play with this yet; I only just noticed that the documentation has been updated with the details.
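From the documentation, automatic CDR is enabled per table from the database side via the DBMS_GOLDENGATE_ADM package; a sketch with hypothetical schema and table names:

-- run as a GoldenGate administrator user; names are placeholders
BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR(
    schema_name => 'HR',
    table_name  => 'EMPLOYEES');
END;
/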

 

 

