Feed aggregator

How To Benefit From SEO Audit

Nilesh Jethwa - Mon, 2017-12-04 16:37

Businesses need to capitalize on the growing online market if they want to succeed in modern commerce. The thing about Internet marketing is that there are a number of things that have to be addressed to ensure that sites are performing well and actually exist as assets for companies that use them.

One of the things that online businesses need to ensure is that they run an SEO audit every now and then. What the audit does is give them insights into how their websites are performing, from their current search engine standing to their effectiveness as an online marketing tool.

It’s important that business sites provide information and remain relevant. With the SEO audit, companies can determine which particular components need improvement and which ones are functioning correctly. Everything from content quality to backlinking to indexing is assessed through this process and this is why it’s something that can’t be discounted from the equation.

Unbeknownst to most people, an SEO audit doesn’t only look into the performance of on-page activities. It also assesses any off-page activities that a company is currently engaged in or has engaged in previously. When it comes to the latter, a good example would be the assessment of the performance, reliability, and value of third-party inbound links.

Read more at https://www.infocaptor.com/dashboard/the-benefits-of-an-seo-audit

Improving Google Crawl Rate Optimization

Nilesh Jethwa - Mon, 2017-12-04 16:12

There are different components that form an SEO strategy, one of which is commonly referred to as crawling, performed by tools often called spiders. When a website is published on the Internet, it is indexed by search engines like Google to determine its relevance. The site is then ranked on the search engine, with a higher ranking translating into higher visibility potential per primary keyword.

In its indexing process, a search engine must be able to crawl through the website in full, page by page, so that it can determine the site’s digital value. This is why it’s important for all elements of the page to be crawlable, or else there might be pages that the search engine won’t be able to index. As a result, these won’t be displayed as relevant results when someone searches with a relevant keyword.

Search engines like Google work fast. A website can be crawled and indexed within minutes of publishing. So one of your main goals is to see to it that your site can be crawled by these indexing bots or spiders. In addition, the easier your site is to crawl, the more points the search engine will add to your overall score for ranking.

There are different methods that you can try to optimize your crawl rate and here are some of them:
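
One common starting point, for example, is the robots.txt file at the root of the site, which tells crawlers what to skip and where the sitemap lives. A minimal sketch – the paths and domain here are purely illustrative:

User-agent: *
Disallow: /search/
Disallow: /cart/
Sitemap: https://www.example.com/sitemap.xml

Keeping low-value pages out of the crawl leaves more of the crawler’s budget for the pages you actually want indexed.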

Read more at https://www.infocaptor.com/dashboard/how-to-improve-google-crawl-rate-optimization

You Don't Want to Miss This Event

PeopleSoft Technology Blog - Mon, 2017-12-04 10:13

If you are a PeopleSoft customer, you probably haven't seen this:

https://blogs.oracle.com/ebsandoraclecloud/replatforming-your-oracle-applications-to-oracle-cloud-infrastructure:-forthcoming-regional-events

After all, not many of us on the PeopleTools side monitor the EBS Blog!  But in this case, it's worth a look.  What will it tell you?  PeopleTools is going on the road to talk about running your PeopleSoft applications in the cloud. 

What does it mean to run your PeopleSoft applications in the cloud?  It means you take your own version of PeopleSoft and run it on Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) technology provided by Oracle Cloud Infrastructure.  While that might sound like a lot of work, it's actually made easy by using a product PeopleTools delivers called PeopleSoft Cloud Manager.  You can learn more about that product on the PeopleSoft Information Portal at www.peoplesoftinfo.com.

Who should attend this event?  Anyone who is interested in:

  • Learning more about the Oracle Infrastructure and Platform clouds
  • Discovering how to reduce the cost of running PeopleSoft applications
  • Learning more about PeopleSoft Cloud Manager
  • Sitting in on a Q&A session with PeopleTools product strategy (yes, we'll even answer questions about Classic Plus!)
  • A free lunch

Go to the link above for more information on the cities, event details and links for registration.  The events we've done so far have been a great success.  Hope to see you there -

 

Oracle Financial Services Unveils FLEXCUBE V14

Oracle Press Releases - Mon, 2017-12-04 08:00
Press Release
Oracle Financial Services Unveils FLEXCUBE V14
Includes 1,200+ new enhancements designed for a connected banking experience and new blockchain, machine learning adapters

Redwood Shores, Calif.—Dec 4, 2017

Oracle today announced general availability of the newest release of its flagship core banking application, Oracle FLEXCUBE V14.
 
Oracle FLEXCUBE V14 marks a significant milestone in Oracle’s componentization strategy. Banks now have the choice of either deploying a pre-configured offering for a comprehensive solution or embarking on a progressive transformation journey, one line of business at a time. Oracle now offers banks more choices than ever before to seamlessly integrate best-in-class functionality to their pre-existing architecture with specialized components for originations, collections, pricing, liquidity management, lending and payments.
 
As banks look to further enhance customer relationships by bringing seamless integration between business and financial lifecycles, FLEXCUBE V14 provides the advantage of more than 1,000 APIs to jump-start initiatives. Banks using FLEXCUBE V14 have a head start in exploring innovative collaborative options to integrate with corporates, third-party service providers, vendors, other banks and networks.
 
"In today's connected world, banks need to seamlessly embed banking services across the lifecycle of a business as well as in the daily activities of the consumer. Banks need to transform their core banking applications to be able to bring in the intrinsic nimbleness of a modern application necessary to respond to this new paradigm,” said Oracle Financial Services Senior Vice President Chet Kamat. “Oracle FLEXCUBE V14 is mission critical for any bank embarking on the path of digital transformation"
 
This release of Oracle FLEXCUBE also features new machine learning and blockchain adapters. The new machine learning adapter unlocks intelligence ingrained in the enterprise to drive process optimization and better decisioning, and to deliver operational and cost benefits.  Separately, seamless connectivity between Oracle FLEXCUBE and other banks is made possible through FLEXCUBE V14’s blockchain adapter, which enables more fluid straight-through processing and high-fidelity information exchange.
 
For the past 40 years, Oracle has connected people and businesses to information with the expressed intent of re-imagining what is possible. FLEXCUBE V14 will continue Oracle’s journey toward providing financial institutions across the globe an opportunity to expand their digital capabilities, rethink ways of doing business and modernize their technology in a considered, efficient manner.
Contact Info
Alex Moriconi
Oracle Corporation
+1-650-607-6598
Alex.Moriconi@Oracle.com
About Oracle
The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.
 
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
 
Safe Harbor
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.
Talk to a Press Contact

Alex Moriconi

  • +1-650-607-6598

Announcing The New Open Source WebLogic Monitoring Exporter on GitHub

OTN TechBlog - Mon, 2017-12-04 08:00

As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter. This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana.

We are also making the WebLogic Monitoring Exporter tool available as open source on GitHub, which will allow our community to contribute to this project and be part of enhancing it. 

The WebLogic Monitoring Exporter is implemented as a web application that is deployed to the WebLogic Server instances that are to be monitored. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics.  With a single HTTP query, and no special setup, it provides an easy way to select the metrics that are monitored for a managed server.
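
For example, once the exporter web application is deployed, its metrics can be fetched with a single HTTP query. A hedged sketch – it assumes the default wls-exporter context root, a managed server listening on port 7001, and illustrative credentials:

curl -s --user weblogic:welcome1 http://managedserver1.example.com:7001/wls-exporter/metrics

The response is plain-text metrics in the exposition format that Prometheus can scrape directly.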

For detailed information about the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server.

Prometheus collects the metrics that have been scraped by the WebLogic Monitoring Exporter. By constructing Prometheus-defined queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain.
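
To wire this up, Prometheus needs a scrape job pointing at the exporter. A minimal sketch of the relevant prometheus.yml fragment – host, port and credentials are illustrative, and it assumes the same wls-exporter context as above:

scrape_configs:
  - job_name: 'wls-exporter'
    metrics_path: /wls-exporter/metrics
    basic_auth:
      username: weblogic
      password: welcome1
    static_configs:
      - targets: ['managedserver1.example.com:7001']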

We can use Grafana to display these metrics in graphical form.  Connect Grafana to Prometheus, and create queries that take the metrics scraped by the WebLogic Monitoring Exporter and display them in dashboards.

For more information, see Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes.

Get Started!

Get started building and deploying the WebLogic Monitoring Exporter, set up Prometheus and Grafana, and monitor the metrics from the WebLogic Managed Servers in a domain/cluster running in Kubernetes. 

  • Clone the source code for the WebLogic Monitoring Exporter from GitHub.
  • Build the WebLogic Monitoring Exporter following the steps in the README file.
  • Install both Prometheus and Grafana on the host where you are running Kubernetes.  
  • Start a WebLogic on Kubernetes domain; find a sample in GitHub.
  • Deploy the WebLogic Monitoring Exporter to the cluster where the WebLogic Managed servers are running.
  • Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes, which steps you through collecting metrics in Prometheus and displaying them in Grafana dashboards.

We welcome you to try this out. It's a good start to making the transition to open source monitoring tools.  We can work together to enhance it and take full advantage of its functionality in Docker/Kubernetes environments.

 

Partner webcast - Create new opportunities with Oracle Analytics and Big Data


We share our skills to maximize your revenue!
Categories: DBA Blogs

Rittman Mead at UKOUG 2017

Rittman Mead Consulting - Mon, 2017-12-04 02:58

For those of you attending the UKOUG this year, we are giving three presentations on OBIEE and Data Visualisation.

Francesco Tisiot has two on Monday:

  • 14:25 // Enabling Self-Service Analytics With Analytic Views & Data Visualization From Cloud to Desktop - Hall 7a
  • 17:55 // OBIEE: Going Down the Rabbit Hole - Hall 7a

Federico Venturin is giving his culinary advice on Wednesday:

  • 11:25 // Visualising Data Like a Top Chef - Hall 6a

And Mike Vickers is diving into BI Publisher, also on Wednesday:

  • 15:15 // BI Publisher: Teaching Old Dogs Some New Tricks - Hall 6a

In addition, Sam Jeremiah and I are also around, so if anyone wants to catch up, grab us for a coffee or a beer.

Categories: BI & Warehousing

I dropped a table in Oracle, but the indexes became 'BIN$...'; after I rebuilt them, the state is still VALID

Tom Kyte - Sun, 2017-12-03 23:46
I dropped the table with the cascade option. After importing the table, the indexes are there with BIN$... names and the state is VALID. Are they really valid? When I try to rebuild them, they rebuild, but the name is not changing.
Categories: DBA Blogs

Keyword Rank Tracking Tools

Nilesh Jethwa - Sun, 2017-12-03 13:34

An important element of search engine optimization (SEO) is choosing the right keyword. With the right keywords, you can make your content rank on search engines. But the work doesn’t stop after ranking: you still need to track where your keywords rank in search results. That way, you can obtain helpful information that will guide you in keeping your SEO efforts successful.

Why Check Keyword Ranking Regularly

One of the main reasons why you need to check your keyword ranking is to identify target keywords. Any SEO professional or blogger should understand how important it is for their content marketing strategies. In fact, a common mistake committed by website administrators and bloggers is writing and publishing articles that don’t target any keywords. It’s like aiming your arrow at something that you are not sure of.

Here are some of the best tools you can take advantage of when tracking your keyword rank:

SEMrush. When using this keyword rank tracking tool, it takes 10 to 15 minutes to determine which keywords or key phrases to use. Whether you are a webmaster or SEO specialist, this tool will help you analyze data for your clients and website. It also offers useful features such as in-depth reports, keyword grouping, and competitor tracking.

Google Rank Checker. This is a premium online tool that you can use for free. It will help you track keyword positioning while making sure that you appear in search results. To use Google Rank Checker, all you need to do is enter the keywords that you want to check as well as your domain name. After putting in the details, you can view the keyword rankings.

 

Read more at https://www.infocaptor.com/dashboard/best-tools-for-keyword-rank-tracking

Database direct upgrade from 11g to 12c and from Exadata X5-2 to X6-2 - RMAN DUPLICATE DATABASE WITH NOOPEN

Syed Jaffar - Sun, 2017-12-03 08:45
At one of our customers recently, the migration of a few production databases from X5-2 to X6-2, combined with an upgrade from Oracle 11g to Oracle 12cR1, was successfully delivered.

As we all know, there are a handful of methods available to migrate and upgrade an Oracle database. Our objective was simple: migrate an Oracle 11g (11.2.0.4) database from X5-2 to Oracle 12c (12.1.0.2) on X6-2.

Since the OS on source and target remains the same, no conversion was required.
Downtime was not a concern either, so we didn't need to worry about the various minimal-downtime options.

Considering the above, we decided to go for a direct RMAN duplicate with the NOOPEN option to upgrade the database. You can also use the same procedure with the RMAN restore/recovery method, but RMAN duplicate database with NOOPEN simplified the procedure.

Below is the high-level procedure to upgrade an Oracle 11g database to 12cR1 using the RMAN DUPLICATE command:


  • Copy the preupgrd.sql & utluppkg.sql scripts from the Oracle 12c ?/rdbms/admin home to the 11g host, to /tmp or any other location;
  • Run the preupgrd.sql script on the source database (which is Oracle 11gR2 in our case);
  • Review preupgrade.log and apply any recommendations; you can also execute the preupgrade_fixups script to fix the issues raised in preupgrade.log;
  • Execute an RMAN backup (database and archivelog) on the source database;
  • scp the backup sets to the remote host;
  • Create a simple init file on the target with just the db_name parameter;
  • Create a password file on the target;
  • Startup nomount the database on the target;
  • Create TNS entries for the auxiliary (target) and primary (source) databases on the target host;
  • Run DUPLICATE DATABASE with NOOPEN, with all adjusted/required parameters (a sketch follows this list);
  • After the restore and recovery, open the database with the resetlogs upgrade option;
  • Exit from the SQL prompt and run the catupgrade (new in 12c) with the parallel option
on the target host:

nohup $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql &

  • After completing the catupgrade, get the postupgrade_fixups script and execute it on the target database;
  • Perform the timezone upgrade;
  • Gather dictionary, fixed-objects, database and system stats accordingly;
  • Run utlrp.sql to ensure all invalid objects are compiled;
  • Review dba_registry to ensure no INVALID components remain;
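
For illustration, here is a minimal sketch of the duplicate run itself – the database name, backup location and file name conversion values are hypothetical and must be adapted to your environment:

rman AUXILIARY /

RUN {
  DUPLICATE DATABASE TO PROD12C
    SPFILE
      SET db_file_name_convert '/u02/olddb/','/u02/prod12c/'
      SET log_file_name_convert '/u02/olddb/','/u02/prod12c/'
    NOOPEN
    BACKUP LOCATION '/backup/prod12c';
}

After the restore and recovery, the database is left mounted and can then be opened for the upgrade:

SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;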

Above are the high-level steps. If you are looking for a step-by-step procedure, review the URL below from oraclepublisher.blogspot.com. We must appreciate the work done there for a very comprehensive procedure and demonstration.


References:

http://oraclepublisher.blogspot.com/2014/06/upgrade-database-to-12c-using-rman.html


DOAG 2017: Automation in progress

Yann Neuhaus - Sun, 2017-12-03 05:28


A week ago, I had the chance to be a speaker at the DOAG Konferenz 2017 in Nürnberg. It’s sometimes hard to find time to attend conferences because the end of the year is quite busy at customer sites. But it’s also important, because it’s time for sharing: I can share what I’m working on about automation and patching, and I can also see how other people are doing.

And it was great for me this year: I started to work with Ansible to automate some repetitive tasks, and I saw a lot of interesting presentations, either about Ansible itself or where Ansible was used in the demo.

The session “Getting Started with Ansible and Oracle” by Ron Ekins from Pure Storage showed a very interesting use case demonstrating the strength of Ansible: a live demo where he cloned 1 production database to 6 different demo environments for the developers. Doing it this way, with a playbook, we are sure that the 6 environments are built without human errors, because Ansible plays the same tasks across all nodes.

The previous day, I attended the session “Practical database administration automation with Ansible” by Mikael Sandström and Ilmar Kerm from Kindred Group. They presented some modules they wrote to interact with the database using Ansible. The modules can be used to validate some parameters, create users, etc… I found the code while I was working on my project but did not dive into the details. The code is available on GitHub and I will definitely have a closer look.

We can think that Ansible is not designed to manage databases but using modules you can extend Ansible to do a lot of things.
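
To give an idea of the style, here is a minimal sketch of a playbook that runs the same check across several database hosts using only core Ansible modules – the host group, paths and SQL are illustrative:

---
- hosts: db_servers
  tasks:
    - name: Check the database open mode on every node
      # the same task runs identically on all hosts, which removes human error
      shell: echo "select open_mode from v\$database;" | sqlplus -s / as sysdba
      register: open_mode

    - name: Show the result per host
      debug:
        var: open_mode.stdout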

Next week, I have the chance to also be at Tech17, organised by the UKOUG. Let’s hope I can continue to learn and share!

 

Cet article DOAG 2017: Automation in progress est apparu en premier sur Blog dbi services.

Goldengate 12.3 Automatic CDR

Michael Dinh - Sat, 2017-12-02 17:51

Automatic Conflict Detection and Resolution

Requirements: GoldenGate 12c (12.3.0.1) and Oracle Database 12c Release 2 (12.2) and later.

Automatic conflict detection and resolution does not require application changes for the following reasons:

  • Oracle Database automatically creates and maintains invisible timestamp columns.
  • Inserts, updates, and deletes use the delete tombstone log table to determine if a row was deleted.
  • LOB column conflicts can be detected.
  • Oracle Database automatically configures supplemental logging on required columns.

I have not had the chance to play with this yet and only just noticed that the documentation has been updated with the details.
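
For reference, the enablement itself looks pleasantly simple in the 12.2 documentation – a sketch, assuming an HR.EMPLOYEES table and a user with the required DBMS_GOLDENGATE_ADM privileges:

BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR(
    schema_name => 'HR',
    table_name  => 'EMPLOYEES');
END;
/

This is what triggers the invisible timestamp columns and supplemental logging mentioned above.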

 

 


Steps For Optimizing Your Website

Nilesh Jethwa - Sat, 2017-12-02 16:57

People use search engines like Google when looking for products or brands these days. In fact, 60 percent of consumers use Google search just to find exactly what they want, and more than 80 percent of online searches lead to in-store visits as well as sales. So if you want to receive massive traffic and increase your sales, optimizing your website is the perfect solution.

Optimized Website, A Business Owner’s Priority

Search engine optimization, or SEO, is not that easy to implement. That is because there are constant changes that you need to keep up with, such as the algorithms of search engines. For instance, Google's search algorithm constantly evolves due to its goal of providing searchers with the best results.

It’s not enough to hire an SEO professional to do the job today and then stop optimizing in the months or years that follow. Experts understand that this is a non-stop process: your website needs to deal with the changing algorithms, and as long as they keep evolving, your website also needs to keep up. This is why an optimized website should be a priority for any business owner across the globe.

Read more at  https://www.infocaptor.com/dashboard/what-are-the-steps-to-optimizing-your-website

Oracle Managed Kubernetes Cloud– First Steps with Automated Deployment using Wercker Pipelines

Amis Blog - Sat, 2017-12-02 07:37

Oracle announced a managed Kubernetes Cloud service during Oracle OpenWorld 2017. This week, I had an opportunity to work with this new container-native cloud offering. It is quite straightforward:

Through the Wercker console

[screenshot]

a new Cluster can be created on an Oracle BareMetal Cloud (aka Oracle Cloud Infrastructure) environment. The cloud credentials are provided:

[screenshot]

Name and K8S version are specified:

[screenshot]

The Cluster Size is configured:

[screenshot]

And the node configuration is indicated:

[screenshot]

Subsequently, Oracle will roll out a Kubernetes cluster to the designated Cloud Infrastructure – according to these specifications.

[screenshot]

The Cluster’s address is highlighted in this screenshot. This endpoint will be required later on to configure the automated deployment pipeline.

This cluster can be managed through the Kubernetes Dashboard. Deployments to the cluster can be done using the normal means – such as the kubectl command line tool. Oracle recommends automating all deployments, using the Wercker pipelines. I will illustrate how that is done in this article.

The source code can be found on GitHub: https://github.com/lucasjellema/the-simple-app. Be warned – the code is extremely simple.

The steps are: (assuming one already has a GitHub account as well as a Wercker account and a local kubectl installation)

  1. generate a personal token in the Wercker account (to be used for Wercker’s direct interactions with the Kubernetes cluster)
  2. prepare a (local) Kubernetes configuration file – in order to work against the cluster using the local kubectl command line
  3. implement the application that is to be deployed onto the Kubernetes cluster – for example a simple Node application
  4. create the wercker.yml file (along with templates for Kubernetes deployment files) that describes the build steps that apply to the application and its deployment to Kubernetes
  5. push the application to a GitHub repository
  6. create a release in the Wercker console – associated with the GitHub Repository
  7. define the Wercker Pipelines for the application – using the Pipelines from the wercker.yml file
  8. define the automation pipeline – a chain of the pipelines defined in the previous step, triggered by events such as a commit in the GitHub repo
  9. define environment variables – specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline
  10. trigger the automation pipeline – for example through a commit to GitHub
  11. verify in Kubernetes – dashboard or command line – that the application is deployed and determine the public endpoint
  12. access the application
  13. iterate through steps 10..12 while evolving the application

Generate Wercker Token

[screenshot]

Prepare local Kubernetes Configuration file

Create a config file in the users/<current user>/.kube directory which contains the server address for the Kubernetes cluster and the token generated in the Wercker user settings. The file looks something like this screenshot:

[screenshot]
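
Since the file is only shown as a screenshot, here is a minimal sketch of what such a config file contains – the cluster address and token are placeholders:

apiVersion: v1
kind: Config
clusters:
- name: wercker-cluster
  cluster:
    insecure-skip-tls-verify: true
    server: https://<cluster-address>
contexts:
- name: wercker-context
  context:
    cluster: wercker-cluster
    user: wercker-user
current-context: wercker-context
users:
- name: wercker-user
  user:
    token: <wercker-personal-token>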

Verify the correctness of the config file by running for example:

kubectl version

[screenshot]

Or any other kubectl command.

Implement the application that is to be deployed onto the Kubernetes cluster

In this example the application is a very simple Node/Express application that handles two types of HTTP requests: a GET request to the URL path /about and a POST request to /simple-app. There is nothing special about the application – in fact it is thoroughly underwhelming. The functionality consists of returning a result that proves the application has been invoked successfully – and not much more.

The application source is found in https://github.com/lucasjellema/the-simple-app – mainly in the file app.js.

After implementing the app.js I can run and invoke the application locally:

[screenshot]

Create the wercker.yml file for the application

The wercker.yml file provides instructions to the Wercker engine on how to execute the build and deploy steps. It makes use of parameters whose values are provided by the Wercker build engine at run time, partly from the environment variables defined at the organization, application or pipeline level.

Here three pipelines are shown:

[screenshot]

The build pipeline uses the node:6.10 base Docker container image as its starting point. It adds the source code, executes npm install and generates a TLS key and certificate. The push-to-releases pipeline stores the build outcome (the container image) in the configured container registry. The deploy-to-oke (oke == Oracle Kubernetes Engine) pipeline takes the container image and deploys it to the Kubernetes cluster – using the Kubernetes template files, as indicated in this screenshot.

[screenshot]
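
Since the file itself is only shown in the screenshot, here is a hedged sketch of the general shape of such a wercker.yml – the script steps and their contents are illustrative, not the author's exact configuration:

box: node:6.10
build:
  steps:
    - npm-install
    - script:
        name: generate TLS key and certificate
        code: ./generate-tls.sh   # hypothetical helper script

push-to-releases:
  steps:
    - internal/docker-push:
        repository: $DOCKER_REPO  # the configured container registry

deploy-to-oke:
  steps:
    - script:
        name: apply Kubernetes templates
        code: kubectl apply -f kubernetes-deployment.yml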

Along with the wercker.yml file we provide templates for the Kubernetes deployment files that describe the deployment to Kubernetes.

The kubernetes-deployment.yml.template defines the Deployment (based on the container image, with a single replica) and the Service – exposing port 3000 from the container.

[screenshot]
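
A minimal sketch of the Deployment and Service that this template describes – the names are illustrative and the apiVersion depends on your cluster version:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-simple-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: the-simple-app
  template:
    metadata:
      labels:
        app: the-simple-app
    spec:
      containers:
      - name: the-simple-app
        image: <registry>/the-simple-app:latest
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: the-simple-app
spec:
  selector:
    app: the-simple-app
  ports:
  - port: 3000
    targetPort: 3000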

The ingress.yml.template file defines how the service is to be exposed through the cluster ingress nginx.

Push the application – including the yml files for Wercker and Kubernetes – to a GitHub repository

[screenshot]

Create a release in the Wercker console – associated with the GitHub Repository

[screenshots]

Define the Wercker Pipelines for the application – using the Pipelines from the wercker.yml file

[screenshot]

Click on New Pipeline for each of the build pipelines in the wercker.yml file. Note: the build pipeline is predefined.

[screenshots]

Define the automation pipeline – a chain of the pipelines defined in the previous step, triggered by events such as a commit in the GitHub repo

[screenshots]

Define environment variables – specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline

[screenshot]

Trigger the automation pipeline – for example through a commit to GitHub

[screenshot]


When the changes are pushed to GitHub, the webhook fires and the build pipeline in Wercker is triggered.

[screenshots]

I even received an email from Wercker, alerting me about this issue:

[screenshot]

It turns out I forgot to set the values for the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN. In this article that is the previous step, preceding this one; in reality I forgot to do it and ran into this error as a result.

After setting the correct values, I triggered the pipeline once more, with better luck this time.

[screenshots]

Verify in Kubernetes – dashboard or command line – that the application is deployed

The deployment from Wercker to the Kubernetes Cluster was successful. Unfortunately, the Node application itself did not start as desired. And I was informed about this on the overview page for the relevant namespace – lucasjellema – on the Kubernetes dashboard – that I accessed by running

kubectl proxy

on my laptop and opening my browser at: http://127.0.0.1:8001/ui.

[screenshot]

The logging for the pod made clear that there was a problem with the port mapping.

[screenshot]

I fixed the code, committed and pushed to GitHub. The build pipeline was triggered and the application was built into a container that was successfully deployed on the Kubernetes cluster:

[screenshot]

I now need to find out what the endpoint is where I can access the application. For that, I check out the Ingress created for the deployment – to find the value for the path: /lucasjellema

[screenshot]

Next, I check the ingress service in the oracle-bmc namespace – as that is in my case the cluster-wide ingress for all public calls into the cluster:

[screenshot]

This provides me with the public IP address.
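
The same information can also be retrieved from the command line – a sketch, assuming the namespaces used above:

kubectl get ingress --namespace lucasjellema
kubectl get services --namespace oracle-bmc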

Access the Application

Calls to the simple-app application can now be made at: http://<public ip>/lucasjellema/simple-app (and http://<public ip>/lucasjellema/about):

[screenshot]

and

[screenshot]

Note: because of a certificate issue, the call from Postman to the POST endpoint only succeeds after disabling certificate verification in the general settings:

[screenshots]


 

Evolve the Application

From this point on it is very simple to further evolve the application. Modify the code, test locally, commit and push to Git – and the changed application is automatically built and deployed to the managed Kubernetes cluster.

A quick example:

I add support for /stuff to the REST API supported by simple-app:

[screenshot]

The code is committed and pushed:

[screenshot]

The Wercker pipeline is triggered:

[screenshot]

At this point, the application does not yet support requests to /stuff:

[screenshot]

After a little less than 3 minutes, the full build, store and deploy to Kubernetes cluster pipeline is done:

[screenshot]

And the new functionality is live from the publicly exposed Kubernetes environment:

[screenshot]

Resources

Wercker Tutorial on Getting Started with Wercker Clusters – http://devcenter.wercker.com/docs/getting-started-with-wercker-clusters#exampleend2end

The post Oracle Managed Kubernetes Cloud– First Steps with Automated Deployment using Wercker Pipelines appeared first on AMIS Oracle and Java Blog.

Alfresco – Unable to move/rename a file/folder using AOS

Yann Neuhaus - Sat, 2017-12-02 05:00

When playing with the AOS implementation, a colleague of mine faced an interesting issue: he just wasn’t able to move or rename any files or folders. The creation and deletion were working properly, but he was unable to move or rename anything when using a Network Location or a Network Drive. This environment was freshly installed with a front-end (Apache HTTPD) set up in SSL, so we worked together to find out what the issue was. All workstations were impacted no matter what OS was used (Windows 7, 8, 10, aso…).

The Network Location or Drive were mounted using the following:

  • URL Style => https://alfresco_server_01.domain.com/alfresco/aos/
  • WebDAV Style => \\alfresco_server_01.domain.com@SSL\DavWWWRoot\alfresco\aos\

In all cases, the workstations were able to connect and create nodes in the Alfresco Server (through AOS), which meant that the parameters/configuration on the Alfresco and Apache HTTPD sides were pretty much OK; nevertheless, we still checked them to be sure:

alfresco@alfresco_server_01:~$ grep -E "^alfresco\.|^share\." $CATALINA_HOME/shared/classes/alfresco-global.properties
alfresco.context=alfresco
alfresco.host=alfresco_server_01.domain.com
alfresco.port=443
alfresco.protocol=https
share.context=share
share.host=alfresco_server_01.domain.com
share.port=443
share.protocol=https
alfresco@alfresco_server_01:~$
alfresco@alfresco_server_01:~$
alfresco@alfresco_server_01:~$ grep aos $CATALINA_HOME/shared/classes/alfresco-global.properties
aos.baseUrlOverwrite=https://alfresco_server_01.domain.com:443/alfresco/aos
#aos.sitePathOverwrite=/alfresco/aos
alfresco@alfresco_server_01:~$

 

For me, the parameters above seemed correct at first sight. Since the Apache HTTPD is in SSL on port 443 and redirects everything to the Tomcat using mod_jk, it is normal for the alfresco and share parameters to use https and port 443 (even if the Tomcat itself is actually not in SSL), because these values should reflect the front-end. As for the “aos.baseUrlOverwrite”, it is normally only needed when you have a proxy server in front of your Alfresco and this proxy isn’t an Apache HTTPD. Since my colleague was using Apache, this parameter wasn’t really needed, but having it set to the correct value shouldn’t hurt either. The correct value for this parameter is also the front-end URL, and that was the current value – or so it seemed.

With the above parameters, we were able to create any kind of files and folders in our Network Locations/Drives. I took some screenshots for this blog and I used a simple folder to demonstrate the issue and the solution. So creating a folder with the default name is working properly:

[screenshot: RenameFolder1]

At this point, I had a new folder in my Alfresco Server which I could clearly see and manage via the Share client. So renaming it from Share wasn’t a problem but doing the same thing through AOS (Network Location or Drive) resulted in this:

[screenshot: RenameFolder2]

At the same time, the following logs were generated on Alfresco side:

2017-11-14 08:34:50,342  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-7] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,442  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-8] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,544  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-9] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,647  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-1] doMove: BAD REQUEST: Destination malformed

 

With the default log level, that’s not particularly helpful… From the Apache logs, it is pretty easy to see when the folder “New Folder” has been created (MKCOL @ 08:34:21) as well as when I tried to rename it (MOVE @ 08:34:50):

alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "MKCOL /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 201 640 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "PROPPATCH /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 971 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder/desktop.ini HTTP/1.1" 404 588 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos HTTP/1.1" 207 5473 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1784 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1803 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1784 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1606 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1909 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"

 

As you can see above, the renaming failed and Apache responded with a 400 error code. The “doMove: BAD REQUEST” error on the Alfresco side reminded me of this JIRA, but the outcome of that ticket was that the parameter “aos.baseUrlOverwrite” was wrongly set… In our case, its value was “https://alfresco_server_01.domain.com:443/alfresco/aos” (as shown above) and this seemed to be the correct URL… But in fact it wasn’t.
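
A convenient way to reproduce the failing MOVE outside of Windows is a raw WebDAV request – a sketch with hypothetical credentials, using the folder from above:

curl -k -u admin:secret -X MOVE \
  -H "Destination: https://alfresco_server_01.domain.com/alfresco/aos/Sites/my-site/documentLibrary/Renamed" \
  "https://alfresco_server_01.domain.com/alfresco/aos/Sites/my-site/documentLibrary/New%20folder"

The Destination header is what AOS compares against its base URL, which is presumably why a mismatching port in “aos.baseUrlOverwrite” makes the destination look “malformed”.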

Just to avoid any uncertainty, we tried to change the value to “https://alfresco_server_01.domain.com/alfresco/aos” (so just removing the port :443, which technically can be there or not…) and then restarted Alfresco… After doing that, the rename was actually working:

[screenshot: RenameFolder3]

So magically the issue was gone… The associated Apache HTTPD logs this time showed a 201 return code:

alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:04 +0100] "PROPFIND /alfresco/aos HTTP/1.1" 207 5347 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:04 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1799 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1780 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/aos/Sites HTTP/1.1" 207 1736 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco HTTP/1.1" 302 224 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/ HTTP/1.1" 207 1858 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND / HTTP/1.1" 302 572 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /share/page/repository HTTP/1.1" 501 1490 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:06 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 201 640 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:07 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/Renamed%202 HTTP/1.1" 207 1797 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"

 

Conclusion for this blog? Take care when you want to set the “aos.baseUrlOverwrite”: do not add the port if it is not really necessary! Another solution to this issue would be to just comment out the “aos.baseUrlOverwrite” parameter, since it is not needed when using Apache HTTPD. I personally never use this parameter (keeping it commented) because I’m always using Apache :).

 

 

Cet article Alfresco – Unable to move/rename a file/folder using AOS est apparu en premier sur Blog dbi services.

WebLogic – SSO/Atn/Atz – 403 Forbidden, another issue

Yann Neuhaus - Sat, 2017-12-02 03:30

Earlier today, I posted another blog with almost the same title, but the similarities stop there. The thing is that when you access your application, there aren’t many possible error codes, so if there is an issue, it’s often the same generic message that is displayed in the browser.

To get some background on the issue that I will present below: we are usually setting up the WebLogic Server, enabling the SSO on the WLS level, aso… Then different Application Teams prepare their specific application war files and deploy them into our WLS. Since all teams are using standard procedures and IQs to deploy all that, the applications are properly working in SSO 99% of the time, but human errors can happen, especially in the dev environments where there are fewer verifications…

So after deploying their customized war files, an Application Team tried to access their application using the SSO URL but got a ‘403 – Forbidden’ error message, crap. As we are responsible for the whole platform, they usually come directly to us so that we can check what is wrong. So as always: enable the debug logs, find out what the issue is, where a mistake was done and how to solve it. In this case (and contrary to the previous blog), the SAML2 response was correct and accepted by WebLogic, so the SSO process was already going one step further; this is why I will just skip the first part of the logs (as well as the LDAP authentication & retrieval of groups) and only show what is happening afterwards (so no Atn but only Atz):

<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 3
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = class weblogic.security.principal.WLSGroupImpl("readers")
        Principal = class weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" ]>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Subject: 3
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = weblogic.security.principal.WLSGroupImpl("readers")
        Principal = weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Roles:Anonymous>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous]>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous]) -> false>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to NotApplicable because of Condition>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Deny>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning DENY>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned DENY>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: false>

 

As you can see at the end of the log, the access is denied (‘isAccessAllowed returning adjudicated: false’) and if you check the previous lines, you can see that this is actually because the function ‘string-at-least-one-member-of’ expects the user to have the role ‘consumer’, but here the user only has the role ‘Anonymous’. There is a mismatch and therefore the access is denied, which caused the ‘403 – Forbidden’ message. So where are those roles assigned? It is actually not on the IdP Partner or LDAP side but on the WebLogic side directly, or rather on the Application side to be more precise…

When working with SSO solutions, there are some additional configurations that you will need to incorporate into your application war file, like for example what the transport mode is for all communications, which URLs (/context) are not to be authenticated through SSO and which ones are, which roles the users need, aso… This is all done (for WebLogic) inside the web.xml and weblogic.xml.

In this case, the ‘consumer’ role shown above is defined in the web.xml file inside the ‘<auth-constraint>’ (Authorization constraint) so it basically says that this is the only authorized role to perform constrained requests. This is an example of configuration that you can put in your web.xml (this one is an extract of what can be done for Documentum D2 with SAML2 SSO):

[weblogic@weblogic_server_01 D2]$ tail -50 WEB-INF/web.xml
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>SSO Public</web-resource-name>
      <description>Non-authenticated resources</description>
      <url-pattern>/help/en/*</url-pattern>
      <url-pattern>/resources/*</url-pattern>
	  ...
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
  </security-constraint>

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>SSO Private</web-resource-name>
      <description>Authenticated resources</description>
      <url-pattern>/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <description>Authorized role</description>
      <role-name>consumer</role-name>
    </auth-constraint>
    <user-data-constraint>
      <description>User Data</description>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>

  <security-role>
    <role-name>consumer</role-name>
  </security-role>

  ...
[weblogic@weblogic_server_01 D2]$

 

So the above configuration exactly matches what WebLogic requires and what is shown in the logs: the user must have the role ‘consumer’ to be able to access the Application. The only question left is why the users aren’t assigned to this role if they have been authenticated via the SAML2 SSO and LDAP, and that’s where the issue is in this case. In the web.xml, you can define what security should apply to your application, but for the assignment to the security roles, you should rather take a look at the weblogic.xml file. You can assign users using their principals (coming from the WLS Security Realm) but it is usually better to use groups instead to avoid managing users individually (more information there). So you already understood it: the issue in this case was simply that the Application Team configured the security roles in the web.xml file but forgot the assignments in the weblogic.xml, and therefore the issue was solved by simply adding the following lines to this file:

[weblogic@weblogic_server_01 D2]$ tail -5 WEB-INF/weblogic.xml
  <security-role-assignment>
      <role-name>consumer</role-name>
      <principal-name>users</principal-name>
  </security-role-assignment>
</weblogic-web-app>
[weblogic@weblogic_server_01 D2]$

 

With these four simple lines, all existing users from the LDAP (that is configured in our WebLogic) are automatically granted the role ‘consumer’ and are therefore allowed to access the application.

From the WebLogic Administration Console, you can check the assignment of a security role using these steps:

  1. Login to the Admin Console using your weblogic account
  2. Navigate to the correct page: DOMAIN > Deployments > YourApp (click on the name) > Security > URL Patterns > Roles > YourRole (click on the name)

The list of roles can be seen here:

[screenshot: SecurityRole1]

Then after clicking on the name of the role, you can see the conditions for the assignments:

[screenshot: SecurityRole2]

To compare the logs before/after, this is what is being printed to the logs after the correction:

<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 3
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = class weblogic.security.principal.WLSGroupImpl("readers")
        Principal = class weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Subject: 3
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = weblogic.security.principal.WLSGroupImpl("readers")
        Principal = weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Roles:Anonymous, consumer>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>

 

Access is allowed. :)

 

 

Cet article WebLogic – SSO/Atn/Atz – 403 Forbidden, another issue est apparu en premier sur Blog dbi services.

WebLogic – SSO/Atn/Atz – 403 Forbidden, a first issue

Yann Neuhaus - Sat, 2017-12-02 02:00

In a previous blog, I explained how it was possible to enable the SSO/Atn/Atz (SSO/Authentication/Authorization) debug logs in order to troubleshoot an issue. In this blog, I will show the logs generated by an issue that I had to deal with last month at one of our customers. This issue will probably not occur very often but it is a pretty funny one so I wanted to share it!

So the issue I will talk about in this blog happened on an environment that is configured with SAML2 SSO. With a fully working SAML2 SSO, the WebLogic hosting the application is supposed to redirect the end-user to the IdP Partner (with a SAML2 request), which processes it and then redirects the end-user back to the WebLogic (with the SAML2 response), which processes the response and finally grants access to the Application. In this case, both redirections were apparently happening properly, but then for an unknown reason the WebLogic Server was blocking access to the application with a “403 – Forbidden” message.

Obviously the first thing I did was to enable the debug logs and then replicate the issue. These are the logs that I could see in the Managed Server log file:

<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <SAML2Servlet: Processing request on URI '/saml2/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/saml2/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): service URI is '/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'ACS'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <Assertion consumer service: processing>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <get SAMLResponse from http request:PBvbnNSJ1cXMlFtZzaXM6bmHhtbG5zm46NhwOlJlOnNhbWxHbWxIb2Fc3wP6dGM6
U0FiB4bWxuczp4NTAwPSJ1cm46b2FzNTDoyLjA6YXNzZXJ0aW9uIaXM6bmFtZXM6
U0FdG9jb2wiIHhtbG5zOmRzaWc9Imh0dHA6Ly93NTDoyLjA6cHJvd3cudzMub3Jn
aHR0cDoa5vcmcvMjAwMS9W5zdGFuY2vL3d3dy53MyYTUxTY2hlbWEtUiIERlc3Rp
MWNxM2FjNzI1ZDjYmIVhNDM1Zlzc3VlSW5zdGFudD0ijhlNjc3OTkiIEMjAxNy0x
LzINpZyMiIHhtwMDAvMDkveG1sZHbG5DovL3d3dy53My5vczOmVuYz0iaHR0cmcv
MMS8wNC94bWxjAwlbmWxuczpzYW1sPMjIiB4bSJ1aXM6bmFtZXM6dcm46b2FzGM6
dGdXRlOM6U0FNZXM6YXR0cmliTDoyLjAAiIHhtbG5zOnhzaT6cHJvZmlslg1MD0i
bmF0aW9JodHRwczovL5ldS5uPSub3Zhc3BoY2hicy1DIyMinRpcyzdDIyM5uZXQ6
ODA4NSMvcG9zdCI9zYW1sMi9zcC3SUhwRHRuN3I1WH9hY3gSUQ9ImlkLUhhTTFha
Z3hBiIEmVzcWW5URXhybHJlG9uc2VUbz0RVFGbWt1VkRaNC0iXzB4ZluUGM1Mjk2
MS0xNw6SXNzdWjo0OTFZlcnNpVyIEZvo1MloiIlQxMb249IjIuMCI+PHNhbWcm1h
...
LTExIgTDEyPSIyMDE3LFjQ525PcLTE2VETExLTOk2VDOjUyWimdGVym90TEyOjU0
OjUyWiI+PHNh8c2FtbDpBdWRpZW5bWw6QXVkabj4jWVuY2VSZXN0cmljdGlvZT5T
c3NGVkVHJhb3b3JkUHJvdG1sOkF1dGhuQ29udGV4VjdnNwb3J0PC9zYWdENsYXNz
QXV0aG5gQXV0TdGF0LTExLTE2VDEaG5JZW1lbnQSIyMDE3bnN0YW50PyOjQ5OjUy
aEV25PcVucnhPSIyMDE3LTExEZmZ2IiBkFmdGVyLTEJWTTZXNzaW9uTm90T2VDEz
WivUWuZGV4PSJpZC13UlVMWGRYOXd6xWzc2lvbklRThFZDJwRDdIgU2VR210OUc0
dWJXYSUQ8L3NhRE1PQ19XlfREVWTQU1MMl9FbnbWw6QXVkaWVuY2U+RpdHlfPC9z
YWRpb24+P1sOkF1ZGllbHJpY3mPHNhNlUmVzdC9zYW1sOkNvbmRpdGlvbnM+bWw6
YXNzUzpjbYW1YXNpczpuIuMlmVmPnVybjpvDphYxzp0YzpTQU1MOjGFzc2VzOlBh
OjAXh0Pj0OjUyWiI+PHNsOkF1dGhuQhxzYW129udGV4bWw6QXV0aG5Db250ZdENs
UmVnRlepBdXRobkNvbzYWmPjwvc2FtbDF1dGhuU1sOkHQ+PC93RhdGVtZW50Pjwv
c2F9zY25zZtbDpBcW1scDpSZXNwb3NlcnRpb24+PCT4=
>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <BASE64 decoded saml message:<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-12T12:23:42Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><samlp:Status><samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></samlp:Status><saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-12T12:23:42Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><saml:Subject><saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID><saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-12T12:28:42Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/></saml:SubjectConfirmation></saml:Subject><saml:Conditions NotBefore="2017-11-12T12:23:42Z" 
NotOnOrAfter="2017-11-12T12:28:42Z"><saml:AudienceRestriction><saml:Audience>SAML2_Entity_ID_01</saml:Audience></saml:AudienceRestriction></saml:Conditions><saml:AuthnStatement AuthnInstant="2017-11-12T12:23:42Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-12T12:38:42Z"><saml:AuthnContext><saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef></saml:AuthnContext></saml:AuthnStatement></saml:Assertion></samlp:Response>>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <<samlp:Response> is signed.>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: start assert SAML2 token>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: SAML2IdentityAsserter: tokenType is 'SAML2.Assertion.DOM'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion signature>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: The assertion is signed.>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion signature>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion attributes>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion attributes>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion issuer>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion issuer>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion conditions>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity - IdentityAssertionException>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <[Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:096537]Assertion is not yet valid (NotBefore condition).>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <exception info
javax.security.auth.login.LoginException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:096537]Assertion is not yet valid (NotBefore condition).
        at com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(IdentityAssertionServiceImpl.java:89)
        at sun.reflect.GeneratedMethodAccessor1410.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.bea.common.security.internal.utils.Delegator$ProxyInvocationHandler.invoke(Delegator.java:64)
		...
>

I cut some of the strings above (all the signatures, the SSL certificates, and so on) because they were really too long and not that important. What matters is the Java exception: the Identity Assertion failed because of the following: ‘Assertion is not yet valid (NotBefore condition)’. This message might seem a little mystical, but it actually points you right at the issue: the ‘NotBefore’ condition is causing the validation to fail.

So why is that? As mentioned above, a SAML2 SSO exchange consists of a request followed by a response. For security reasons, some conditions apply to them and must be fulfilled for the SSO to work. To understand this a little better, I took the decoded SAML2 response from the logs above and reformatted it as XML to make it more readable.
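By the way, if you only have the encoded message from the logs, decoding it yourself is straightforward. Here is a minimal Java sketch (the class name and the way the string is passed in are mine, purely for illustration):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Decode a BASE64-encoded SAML message copied from the WebLogic debug logs
public class DecodeSamlMessage {
    public static void main(String[] args) {
        // args[0]: the BASE64 string taken from the 'BASE64 encoded saml message' log entry
        byte[] decoded = Base64.getDecoder().decode(args[0]);
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}

The decoded response, reformatted as XML, looks like this: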

<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-12T12:23:42Z" Version="2.0">
	<saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer>
	<dsig:Signature>
		<dsig:SignedInfo>
			<dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
			<dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
			<dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-">
				<dsig:Transforms>
					<dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
					<dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
				</dsig:Transforms>
				<dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
				<dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue>
			</dsig:Reference>
		</dsig:SignedInfo>
		<dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue>
		<dsig:KeyInfo>
			<dsig:X509Data>
				<dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate>
			</dsig:X509Data>
		</dsig:KeyInfo>
	</dsig:Signature>
	<samlp:Status>
		<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
	</samlp:Status>
	<saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-12T12:23:42Z" Version="2.0">
		<saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer>
		<dsig:Signature>
			<dsig:SignedInfo>
				<dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
				<dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
				<dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi">
					<dsig:Transforms>
						<dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
						<dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
					</dsig:Transforms>
					<dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
					<dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue>
				</dsig:Reference>
			</dsig:SignedInfo>
			<dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue>
			<dsig:KeyInfo>
				<dsig:X509Data>
					<dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate>
				</dsig:X509Data>
			</dsig:KeyInfo>
		</dsig:Signature>
		<saml:Subject>
			<saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID>
			<saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
				<saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-12T12:28:42Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/>
			</saml:SubjectConfirmation>
		</saml:Subject>
		<saml:Conditions NotBefore="2017-11-12T12:23:42Z" NotOnOrAfter="2017-11-12T12:28:42Z">
			<saml:AudienceRestriction>
				<saml:Audience>SAML2_Entity_ID_01</saml:Audience>
			</saml:AudienceRestriction>
		</saml:Conditions>
		<saml:AuthnStatement AuthnInstant="2017-11-12T12:23:42Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-12T12:38:42Z">
			<saml:AuthnContext>
				<saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
			</saml:AuthnContext>
		</saml:AuthnStatement>
	</saml:Assertion>
</samlp:Response>

As you can see in the XML, two conditions apply to the SAML2 assertion:

  • it must not be used before the ‘NotBefore’ timestamp (here, the moment the IdP issued the response)
  • it must not be used on or after the ‘NotOnOrAfter’ timestamp (here, the issue time plus 5 minutes)

In this case, the NotBefore is set to ‘2017-11-12T12:23:42Z’, which is the current time on the IdP Partner server. However, as you can see in the logs, the WebLogic Server hosting the application is actually one second behind that time (Nov 12, 2017 12:23:41 PM UTC). The NotBefore restriction therefore applies, and the WebLogic Server has no other choice than to return a ‘403 – Forbidden’ because the SAML2 response is not yet valid.
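Conceptually, the check that fails here is nothing more than a clock comparison. The following Java sketch illustrates such a time-window validation; this is only an illustration of the principle, not WebLogic's actual code, and it assumes no clock-skew tolerance is configured:

import java.time.Instant;

// Illustrative <saml:Conditions> time-window check
public class SamlTimeWindowCheck {
    static void checkConditions(Instant notBefore, Instant notOnOrAfter) {
        Instant now = Instant.now(); // the Service Provider's local clock
        if (now.isBefore(notBefore)) {
            // The branch hit above: SP clock = 12:23:41, NotBefore = 12:23:42
            throw new IllegalStateException("Assertion is not yet valid (NotBefore condition).");
        }
        if (!now.isBefore(notOnOrAfter)) {
            throw new IllegalStateException("Assertion is no longer valid (NotOnOrAfter condition).");
        }
    }

    public static void main(String[] args) {
        checkConditions(Instant.parse("2017-11-12T12:23:42Z"),
                        Instant.parse("2017-11-12T12:28:42Z"));
    }
}

Some SAML implementations let you configure a clock-skew tolerance precisely to absorb this kind of one-second drift, but keeping all servers NTP-synced is the cleaner fix.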

In this case, restarting the NTP daemon (Network Time Protocol) on the IdP Partner Linux server resynced the clock, which solved the issue. Having a server living in the future can cause some interesting behaviors :).
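If you want to check a server's clock offset programmatically instead of through the OS tools, a small sketch using Apache Commons Net could look like the following (this assumes the commons-net library is on the classpath; the NTP server address is just an example):

import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

// Query an NTP server and print the local clock offset in milliseconds
public class ClockOffsetCheck {
    public static void main(String[] args) throws Exception {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(5000); // do not hang if the NTP server is unreachable
        TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
        info.computeDetails(); // computes offset and round-trip delay
        System.out.println("Local clock offset (ms): " + info.getOffset());
        client.close();
    }
}

An offset of even one second between the IdP and the Service Provider is enough to trigger the NotBefore condition seen above.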

This article WebLogic – SSO/Atn/Atz – 403 Forbidden, a first issue appeared first on Blog dbi services.

Forms Builder won't connect to database - ORA 28040

Tom Kyte - Fri, 2017-12-01 16:46
I've installed an Oracle 12.2 database on my PC running Windows 10. That seems to be fine and I can connect to the database and the sample HR database is all there. I have then installed the Oracle Developer Suite v10.1.2. That installation appears t...
Categories: DBA Blogs

Service and module

Tom Kyte - Fri, 2017-12-01 16:46
Dear Team, hope you are doing well! Please help me understand the difference between a service and a module in the Oracle database. What is the use of a module? Thanks, Pradeep
Categories: DBA Blogs

Attracting Visitors To Your Website

Nilesh Jethwa - Fri, 2017-12-01 15:28

It’s not that easy to go from zero visitors to thousands of potential customers in an instant. But if you implement the right traffic-generating strategy, you can increase the number of visitors coming to your website. If you can get enough traffic, you can generate sales. And you can review the main components of your sales process to get more traffic and keep it coming.

Getting Instant and Cheap Traffic

The key components in the sales process that you need to test are your opt-in offer, navigation, order form, and sales copy. You need to test them before launching a large-scale traffic-generating campaign. Why is this important? Because testing first keeps you from losing a lot of money if the campaign fails. Here are some of the best ways to attract visitors to your site:

  • Test your site. There are plenty of components to test on your website, including your web design, content, and many others. However, stick to what matters most when testing: the basics are your sales copy, order process, opt-in offer, and site navigation. Later on, once your sales generation is stable, you can test the other components.
  • Use Yahoo! Search Marketing. Pay-per-click search engines offer cheap, instant, qualified traffic, but you need to come up with targeted keywords to get results. Bidding for top listings gives you visibility on search results pages such as Yahoo, MSN, and others, helping you reach 80% of Internet users across the globe.
  • Offer free content such as eBooks. Providing irresistible free resources in exchange for priceless publicity is a common practice among digital marketers today. For instance, a valuable eBook or well-written blog posts can generate loads of web traffic without spending a dime. Just don’t turn them into a sales pitch, or you will drive potential customers away.

Read more at https://www.infocaptor.com/dashboard/how-to-attract-visitors-to-your-site

Pages

Subscribe to Oracle FAQ aggregator