
Feed aggregator

D2 xPlore search on apostrophes

Yann Neuhaus - Wed, 2015-04-22 03:32

When using D2 searches you are likely to run into trouble with special characters such as apostrophes. The goal of this blog is to show you how to configure searches for additional special characters.

Many of your documents will contain apostrophes or other special characters that you want to search on. Unfortunately, by default the D2 search returns nothing when you type an apostrophe directly into the search bar. xPlore replaces special characters with spaces and stores the surrounding words one after the other, so it can match them more easily in that order.

In fact this is not a D2 issue. Your xPlore is likely not configured to handle these characters. By default xPlore recognizes the plain apostrophe, but Word, for example, uses different kinds of apostrophes (curly quotes). These characters have different character codes, so xPlore doesn't recognize them.

To solve this issue you simply have to tell xPlore to handle Word's apostrophes (or whatever character you want to search on).

In order to do this, log in to your xPlore server and edit the following file:

$DSEARCH_HOME/config/indexserverconfig.xml

Find the line with:

special-characters="@#$%^_~`&:.()-+='/\[]{}" context-characters="!,;?"

Then add your apostrophes or special characters as follows (copy and paste them directly from Word into the file):

special-characters="@#$%^_~`&:.()-+='/\[]{}’‘„“"

And save the file.

Now, new indexed documents can be searched with apostrophes. But note that if you want the older documents to be searchable as well, you will need to re-index the whole repository.

Weblogic ThreadPool has stuck threads

Yann Neuhaus - Wed, 2015-04-22 02:49

In WebLogic it is common to see this warning: ThreadPool has stuck threads. Here we will see a way to determine what the cause might be.

When monitoring WebLogic you may notice that from time to time your servers go into Warning mode. And when clicking on the warnings you see this screen:

[Screenshot: warning reason]

The reason is presented as "ThreadPool has stuck threads", which doesn't help a lot. But we can take a deeper look and maybe find the real cause.

Now click on the server name from your list, then go to Monitoring -> Threads.

The Hogging Thread Count column shows how many threads seem stuck. The Pending User Request Count column shows the number of requests not yet delivered to users. If it is different from 0, your users are impacted.

In order to visualize the real state of threads click on Dump Thread Stacks:

[Screenshot: Monitoring > Threads view]

Some threads are marked as stuck even though they aren't: if the work handled by a thread takes too long to complete, WebLogic flags it as stuck. By default WebLogic flags a thread as stuck after 600 seconds (10 minutes) of continuous work (this parameter can be changed, as shown below).
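For reference, this threshold can be adjusted with WLST, the scripting tool that ships with WebLogic. A minimal sketch, assuming an admin server at t3://adminhost:7001 and a server named myserver (the credentials, URL and names are placeholders):

connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/myserver')
cmo.setStuckThreadMaxTime(1200)      # seconds of continuous work before a thread is flagged as stuck (default 600)
cmo.setStuckThreadTimerInterval(60)  # how often WebLogic scans for stuck threads, in seconds
save()
activate()
disconnect()

The same settings are also exposed in the Administration Console under the server's configuration pages.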

When the thread dump is displayed you can search for threads with the STUCK status:

[Screenshot: thread dump]

Here you can see that the thread is stuck in the java.lang.Object.wait() method. This means the thread is waiting for a result or for another process to finish. In this particular case we can see that com.crystaldecisions.sdk.occa.managedreports.ras.internal.CECORBACommunicationAdapter.request() was called just before the wait, so the thread is most likely waiting for the result of this "request" call.

As we suspected, the issue came from a reporting server that could no longer deliver reports. That's why we had stuck threads.

Stuck threads are generally caused by the application itself or by other components that have nothing to do with WebLogic.

Last point: you can check whether a thread is stuck in the previous view, like this:

[Screenshot: stuck thread in the Threads view]

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Wed, 2015-04-22 00:05

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Wed, 2015-04-22 00:05

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Wed, 2015-04-22 00:05

Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Wed, 2015-04-22 00:05

I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Wed, 2015-04-22 00:05

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Wed, 2015-04-22 00:05

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Wed, 2015-04-22 00:05

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Wed, 2015-04-22 00:05

It has been a few weeks since my last blog post but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Cisco’s Collaborative Knowledge: Further blurring of higher ed & professional dev lines

Michael Feldstein - Tue, 2015-04-21 17:31

By Phil Hill

Cisco, which at one time was the most valuable company in the world, made an announcement that apparently got no one's attention (outside of the venerable e-Literate). Cisco[1] released a new product, Collaborative Knowledge (CK), designed to allow companies to access real-time expertise and enable collaborative work based on employees' expertise, or, in a word, competencies. From the press release (because I cannot find an independent news article to reference):

To be positioned for growth, performance and productivity, organizations must transform into digital workplaces where knowledge sharing, learning and talent innovation are able to occur in real-time, anytime, anywhere.

Cisco Collaborative Knowledge integrates best-in-class consumer and business technologies to enable capabilities such as highly secure knowledge sharing, expert identification, continuous learning, social networking and analytics into one complete and end-to-end enterprise knowledge exchange. With Cisco Collaborative Knowledge, workers are able to benefit from these continuous learning features, helping organizations innovate and solve real-world business challenges.

Beyond the Buzzwords, What Is It?

The key description here is “knowledge sharing, expert identification, continuous learning, social networking and analytics”. The best way to conceptualize this product is by *not* viewing it as an LMS, which in corporate circles tends to be designed around formal learning programs and learning administrators' needs. Like Instructure's Bridge product, the new Cisco offering is designed around end-user needs, and it seems to be a very different approach: not knowledge management, but employee access to knowledge, learning and networking based on expertise.

[Dilbert strip: http://dilbert.com/strip/2000-06-13]

Unlike an LMS, Cisco CK attempts to leverage informal, or tacit, knowledge by building up profiles of employees that include endorsed knowledge maps.

[Screenshot: example employee profile with endorsed knowledge map]

In aggregate, a company builds up a knowledge map that allows employees to browse and search.

[Screenshot: company-wide knowledge map]

One of the core use cases is for an employee to do a context-sensitive universal search across employees, communities, libraries and training catalogs. Once there are users identified with the endorsed skills matching a search, there is built-in capability to contact that employee by phone, email, or WebEx virtual discussion.

[Screenshot: expert search results with WebEx contact button]

In another tab of results, you can find communities – which include discussions, blogs & wikis.

[Screenshot: community search results]

[Screenshot: an example community]

This product seems to hit the right notes in terms of helping end users – employees – get their jobs done, unlike historical learning or knowledge systems that feel like they exist to make some learning department's job easier. What is probably the most interesting aspect to me, in terms of corporate knowledge & collaboration, is how a full implementation of Cisco CK would reorganize a company more along personal knowledge, networking and experience and away from hierarchies and linear control.

Use in Higher Ed

During the demo, the group mostly described usage of Cisco CK within companies, or perhaps as a nod to me being on the call, “also in higher ed”. To be honest, I don't see that a straightforward implementation of the product suite makes sense within a college or university. While the concept makes sense on paper, universities (especially faculty) are organized into semi-autonomous departments, divisions or colleges where cross-campus collaboration is not encouraged unless it is for a defined academic program. I could see faculty viewing this as a time sink, not wanting to be “catalogued” and not wanting people to be able to reach them with one click. I could be wrong here, but it seems like a cultural mismatch.

I could see Cisco CK applied across a discipline-specific group, but in many cases it would be difficult to know who is the purchasing entity and who is administering the system. Cisco’s example video released along with the product announcement was based on the New York Academy of Sciences, a scientific society, which somewhat backs up this supposition. There might be other direct uses in education, but likely not in higher education institutions. Let me know if I’m missing something in the comments.

What is relevant to higher ed in my mind, however, is not the idea of institutional implementations but rather the set of uses that could be enabled by connecting to external data sources. During the demo the team explained that the system does allow access to multiple sites, but some integrations are not there yet. If a user searches for a particular skill or competency, one of the search results will include relevant sections of the training catalog. I believe the system is designed for the primary source to be the corporate LMS here. But what if the “catalog” includes continuing education courses offered by partner institutions? What about MOOCs targeted at professional development – particularly following the concept of the Open Education Alliance or Coursera's Specializations?

This track seems to be a real opening for educational providers – whether institutions in a continuing education role or alternative providers – to more directly connect to employers and their money. The service might not just be for courses but also for external experts as shown in the video above. This move could further blur the line between higher education and professional development.

[Screenshot: training catalog search results]

Furthermore, let’s look at the knowledge map of each employee’s profile. Right now it seems set up to be an internal database, with Cisco providing an internal LinkedIn service for their customers. I asked if Cisco had plans to allow external definitions of the knowledge map, such as directly integrating to LinkedIn. They indicated ‘it is on the roadmap’. If that does happen, now you can see a direct mapping of actual competencies from someone’s education into company-endorsed expertise. You could be known within a company not just as ‘Sarah with an accounting degree working in corporate finance’, but as ‘Sarah with expertise on amortization and competitive analysis’.

I do not know enough about the corporate knowledge / training market to judge whether Cisco CK will be a success, but the product is intriguing. If they go down the path of integrating external data sources of training or education opportunities, and if they go down the path of acknowledging a LinkedIn definition of skills (or perhaps competencies coming from a CBE degree), then this announcement could be quite significant. It would accelerate the move towards companies defining, from the demand side, what educational opportunities they want for their employees and what skills or competencies they want from college graduates.

There is a growing movement among companies, especially technology companies, to value skills and competencies. What Cisco CK shows is that this valuation is not just a matter of hiring college graduates: it is moving into how a company operates and how employees are valued over time based on their acquired knowledge. Cisco CK also has the potential to offer a valuable marketplace for post-degree or alternative-to-degree education providers.

From a long-term perspective, count Cisco CK as a view towards a redefinition of what institutions and alternative educational providers produce as outputs – not just degrees and grades, but also skills and competencies and lifelong learning opportunities.

  1. Disclosure: Cisco, through a different division, is a client of MindWires Consulting.

The post Cisco’s Collaborative Knowledge: Further blurring of higher ed & professional dev lines appeared first on e-Literate.

PeopleSoft Interaction Hub Moving to Selective Adoption

PeopleSoft Technology Blog - Tue, 2015-04-21 16:15

The PeopleSoft Interaction Hub has been on the continuous release model for some time.  Now, we're taking the next step and moving the Hub to the Selective Adoption model.  All PeopleSoft applications have gone to Selective Adoption, and soon Interaction Hub customers will be able to take advantage of this breakthrough release process and all the tools--like PeopleSoft Update Manager--that make it so powerful.

There is a paper on My Oracle Support that describes how customers should prepare for the move to Selective Adoption.  Go to the PeopleSoft Update Manager home page on MOS, then select the PeopleSoft Update Image Home Pages tab, and choose the Interaction Hub Update Image Page.  That page contains a link to a white paper that explains what customers need to do to get ready for using the PeopleSoft Update Manager and the Selective Adoption Process.  This page is also where Interaction Hub update images will be posted when they become available.

Why are we doing this?  The Selective Adoption process provides many benefits to customers.  This new process streamlines the maintenance and update process and employs virtualization and the latest PeopleSoft lifecycle tools to make the process faster, cheaper, and more efficient.

  • Updates are quicker, less expensive
  • Business lines benefit from the latest functionality resulting in higher ROI
  • Enhancements are delivered regularly, new business value provided incrementally, no long waits for enhancements
  • Eliminate major upgrades
  • Can be done on your schedule
  • Can retain strategic customizations
There are lots of resources on the PUM home page listed above.  Here is a good video overview of the Selective Adoption process.

Oracle and Docker

Floyd Teter - Tue, 2015-04-21 16:14
So, as many of you know, I've been working out different ways to host my Oracle labs and demos instances without chewing up phenomenal amounts of disk space and processing power.  Lately, I've been diving into Docker.

Docker has turned out to be pretty cool and very easy to learn.  And it's lightweight.  The idea is that you run containers: your app and its operating environment bundled together as a single, portable unit.  The big win is that containers share the host's operating system kernel, so the total overhead of all the containers is much, much less than the sum of the parts.

I'm still digging in, so I'll keep you posted as I progress.  But it looks pretty promising in terms of fulfilling my non-production needs.  One example...

I downloaded and installed the Oracle XE database...with APEX...from scratch in about 22 minutes earlier today.  All done with Docker (and because I run OS X, I also use boot2docker).


Game. Set. Match.  Pretty easy.  Runs fast.  Low overhead.  You may want to check it out.

The investment is all about the PRODUCT, the PRODUCT, the PRODUCT.

Chris Warticki - Tue, 2015-04-21 13:46

Everyone wants to know where the 22% of the value of the Support investment goes.  I, on the other hand, would like to know where the value of the 78% of the Product investment went.  It's quite easy to share with you the entire inventory of assets, resources, tools and best practices when it comes to the value of the support investment.

Let's talk Product for a moment. Take the Database for example.  Review the 40+ features in the Enterprise Edition.  How many are you using?  Most customers are using LESS THAN 10% of those features.  It's like having a very expensive Excel spreadsheet one would like to call a database, and running very costly queries from it.  Are you capitalizing on the 9 Application Development features?  Have you been running the same old SQL statements without taking advantage of the years' worth of SQL improvements built into the product?  Let's throw the Security card.  Nobody wants to wind up on the front page, nor on the 6 o'clock news, because they just had a data breach.  But are you utilizing the 6 Security Defense features?

Where am I headed with this?  Training.  That's where.
In the last decade, across the global economy, the first budget to be cut was training.  In my informal polls when working with customers, I find they haven't taken a 3-5 day instructor-led class on the current version of the product in over 5-7 years!  I've met DBAs managing 12c the same way they installed and managed v6 or v7 of the RDBMS.

There is help, and there are resources.

Back to the investment made in Oracle, and the re-investment: two thirds of the Support investment goes back into R&D.  Billions of dollars every year.  Leverage the features and functions found within your products.  Once you do, you gain efficiencies.  When efficiencies are gained, profitability is realized.  Oracle IS the example for using our own products, features and functions.  Oracle returned a dividend to shareholders in the company's 30th anniversary year and continues to do so, in what some would term one of the most turbulent global economies ever.

The investment is in the Product, the Product, the Product.

Chris Warticki is a member of the Global Customer Management team for Oracle Support Services
Tweeting @cwarticki

“Digital Disruption: It’s Not What You Think” by Oracle’s Rob Preston

Linda Fishman Hoyle - Tue, 2015-04-21 12:21

A Guest Post by Oracle’s Rob Preston, Editorial Director, Content Central (pictured left)

TechTarget’s Tom Goodwin uses Uber, Facebook, Alibaba, and Airbnb as examples of players with new tech-savvy business models that are disrupting their industries. Goodwin believes that our economic future lies in controlling the software interfaces that connect goods and services with the customer. In his mind, it’s all about the digital customer interface; things such as costly brick and mortar, physical goods, and intangible services are old school and irrelevant.

But Oracle’s Rob Preston contends (in a recent Forbes OracleVoice article) that digital disruption runs much deeper than the customer interface. “It’s also about modernizing manufacturing and financial processes, overhauling entire supply chains, bringing more intelligence to marketing and sales strategies, making it easier for people and teams to collaborate, and rethinking talent recruitment and management,” he says.

Preston states that the disruption caused by cloud, mobile, data analytics, and social might not be as exciting as that caused by the likes of Uber and Airbnb, but it's just as profound. Preston does a nice job of weaving examples, industries, and insights into the article to make his case.

Productivity—A Priority for Every Business Leader

Linda Fishman Hoyle - Tue, 2015-04-21 12:12

A Guest Post by Caesar Peter (pictured left), Senior Director, CX Sales Applications Group

Oracle CloudWorld was held in the first week of April in India. Interestingly, the summit coincided with the start of the new financial year in India, and hence performance strategy was top of mind for almost every business leader with whom I met.

Generally, the most popular discussions at these events are about setting a higher sales budget, IT applications, project status, the economy, etc. In contrast to this popular approach, it was wonderful to note that most of the business leaders spoke about improving their employees' productivity, spearheading the change right from their sales folks.

In fact, the head of sales of a leading consumer goods organization even took the time to describe what he meant by productivity:

  1. Complete more activities in the limited (stipulated) time
  2. Ensure increased outcomes from those activities

After hearing this, a senior person in the banking industry not only agreed, but also shared that his bank is in the process of evaluating a new set of modern, customer-facing tools that could facilitate their employees’ productivity. He shared that their evaluation criteria are 1) availability (100 percent mobile, accessible from anywhere), 2) agility (simple and easy to use), and 3) intelligent insights (alerts, reminders, notifications, and infolets).

As we all know, improving sales productivity can bring incredible value to an organization. For example, a 15 percent increase in employee productivity could add a 10 to 15 percent improvement to the bottom line of the overall business performance. It is indeed wonderful to watch these leaders and organizations move from knowing how important sales productivity is to knowing how to make these improvements happen.

Find an Oracle CloudWorld event near you.

Conformed Dimension and Data Mining

Dylan's BI Notes - Mon, 2015-04-20 20:48
I believe that conformed dimensions play a key role in data mining.  Here is why: a conformed dimension can bring data together from different subject areas and, sometimes, from different source systems, so the relevant data can be brought together.  Data mining is a technique for finding patterns in historical data. […]
Categories: BI & Warehousing

Enable Real Application Security (RAS) in APEX 5.0

Dimitri Gielis - Mon, 2015-04-20 15:25
Oracle Database 12c introduced Oracle Real Application Security (RAS), the next generation of Oracle Virtual Private Database (VPD). In APEX 5.0, RAS support is built in declaratively. Follow the steps below to enable it:

Login to the INTERNAL workspace and go to Manage Instance > Security:

In the Real Application Security section set Allow Real Application Security to Yes.

Next, log in to the Workspace your application is built in and go to your Authentication Scheme.
You'll see a new section in there called Real Application Security.



The dropdown has the following options:

  • Disabled: Real Application Security does not get used in the application. 
  • Internal Users: APEX creates a RAS session and assumes that all users are internal and passes false via the is_external parameter to dbms_xs_sessions.assign_user. 
  • External Users: APEX creates a RAS session and passes true via the is_external parameter to dbms_xs_sessions.assign_user. 

The last two options enable RAS mode and make the Dynamic Roles and Namespaces shuttles available (from the help in APEX). Make sure that users get privileges to access the application's schema objects. For External Users you can, for example, grant database privileges to a RAS Dynamic Application Role and configure it in this authentication scheme as a Dynamic Role. You can also enable roles via a call to apex_authorization.enable_dynamic_groups, e.g. in a Post-Authentication procedure.
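As a minimal sketch, such a Post-Authentication procedure might look like this (the group names are hypothetical and would need to match RAS dynamic roles defined in your database):

procedure post_authentication
is
begin
  -- enable RAS dynamic roles for the user who just logged in
  apex_authorization.enable_dynamic_groups (
      p_group_names => apex_t_varchar2('CLERK', 'MANAGER'));
end post_authentication;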

You can read more about Oracle Real Application Security and view an entire example of how to set up RAS on the database side. I'm still learning about all the RAS features myself, but thought I would already share the above. I plan to include a chapter about RAS and APEX 5.0 in my e-book, with a real-life example, as I see a big benefit in using it in highly regulated, secure and audited environments.


Categories: Development

Keeping Cassandra Alive

Pythian Group - Mon, 2015-04-20 12:28

Troubleshooting Cassandra under pressure

This is the second blog post in the series, and it is a bit more technical than the first one. I will explain some things that can be done to keep a cluster or server running when you are having problems in that cluster.

There have been a lot of changes in Cassandra over the last 4 years (from 0.8 to 2.1), so I will refrain from discussing problems that affect only specific versions. Also, this is the kind of troubleshooting you need when you can't “add a node”.

Why can’t I just add a node? Well, if you aren’t on vnodes, and you didn’t pre-calculate the token ranges, adding a node is a big effort. Other constraints may also apply, like budget or delivery time for hardware (if you are on bare metal). Plus rack capacity, power constraints, etc…

Now you may say:

“Ok, we can’t add a node! What should we do?! We have a storm coming!”

So, I did navigate through that storm and it’s not an easy task, but it’s doable! First thing: you have to know what you have, that is critical! You also need to know where you can take more damage.

Let’s assume you have the following situation, and what I recommend for it:

  • Heavy Write Cluster, Low Read

Now let’s define “storm”: a storm is not when Cassandra fails, it’s an unanticipated load increase or a disaster. What happens is that you have more load than your planned capacity (either because of node failures or because of a sudden traffic increase). This will increase your resource usage to a point where your machines will start to die.

Let’s understand what can cause a Cassandra process to die, and probably the machine with it (if you OOM and you didn’t configure swap… I warned you!), in the scenario described above.

  1. More data to the commitlog = more I/O pressure (Discard if you have commitlog on a different HDD)
  2. Data is written to memtables = Memory is used
  3. Memtables reach thresholds faster, get flushed to disk = I/O pressure
  4. Compaction starts faster and frequently = I/O pressure, CPU pressure
  5. Too much I/O: compaction can’t compact fast enough and the memtables aren’t flushing fast enough = memory not being released.
  6. Too much memory usage, JVM triggers GC more frequently = CPU pressure
  7. JVM can’t release memory = OOM
  8. OOM = PUM! The node dies (if you are “lucky” the kernel will kill Cassandra)

And I didn’t even go through the hints that would be stored as nodes become unresponsive and sent out once they come back online.

So now we know where our pain points are. Let’s understand them and see what we can do about it:

  • Commitlog – Let’s just assume you have this on separate HDD, and don’t do anything about it (after all it’s your safeguard).
  • Memtables – We can control how often they are flushed. It is a possible tweak point. Although it requires a Cassandra restart for the changes to produce an effect.
  • Compaction – This we can control via nodetool (e.g. nodetool setcompactionthroughput); in later versions we can even disable it with nodetool disableautocompaction.
  • JVM GC – We can change settings, but difficult to tweak and a restart is needed.
  • Swap – We can play a bit here if we do have a swap partition.
  • Dirty_ratio – Controls how much dirty data the kernel buffers before forcing writes to the HDD (the vm.dirty_ratio kernel setting). Tuning it can put your data at risk, but it can also help.
  • Replication Factor – This can be changed on the fly; lowering it will help by putting less pressure on the nodes.

So, what to do? Where to start? It depends, case by case. I would probably let my read performance suffer to keep the writes coming in. The easiest way to allow that is to drop the read consistency level to ONE. That sometimes looks like the fast and easy option, but if your writes are not using QUORUM and/or you have read_repair enabled, you will spread more writes (with RF>1). I would pick compaction as my first target; you can always try to get it back up to pace later (re-enable it, increase compaction throughput). Another option would be to increase dirty_ratio and risk losing data (trusting the commitlogs + RF>1 helps avoid losing data), which gives your HDDs more room until the cluster recovers.
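For the consistency-level part, here is a minimal sketch using the DataStax Python driver (the contact point, keyspace, table and key are placeholders): reads are dropped to ONE so they touch a single replica, while writes keep their original consistency level.

from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(['10.0.0.1'])
session = cluster.connect('my_keyspace')

# Read from a single replica only, easing pressure on the overloaded nodes
query = SimpleStatement(
    "SELECT * FROM events WHERE id = %s",
    consistency_level=ConsistencyLevel.ONE)
rows = session.execute(query, ('some-id',))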

But every case is different. I will talk about my own case, problems and resolutions this Wednesday at the Datastax Day in London! Feel free to join me!

 

Categories: DBA Blogs

Pillars of PowerShell: Debugging

Pythian Group - Mon, 2015-04-20 12:09
Introduction

This is the third blog post in the series on the Pillars of PowerShell. The first two posts are:

  1. Interacting
  2. Commanding

We are going to take a bit of a jump and dig into a particular topic that I think is better to go over up front instead of later. In this post I want to go over a few ways you can debug scripts, or just issues, in PowerShell. This topic can get very advanced and make for a very long blog post. Instead of trying to put all that in one post, I have a few links to share at the end that will point you to some of the deeper-dive material on debugging.

Pillar 3: Debugging

When it comes to writing scripts or developing T-SQL procedures, you will generally see folks use print statements to either check where the processing is at in the script or output some “additional” information. PowerShell is no different, and it offers cmdlets that write output to various destinations and can even be used to make decisions. One of the main ones I like to use when I write scripts is Write-Verbose. You may see some folks use Write-Host in their scripts, and all I can say to that is, “be kind to puppies”. The basic gist is that Write-Host outputs plain text, and will always output it unless you comment it out or remove it from your script. With Write-Verbose you can have that information output only when a parameter switch is used, rightly called “-Verbose”. This switch is included in most built-in cmdlets for modules provided by Microsoft. If you want to include it in your script or function you simply need to include this at the top:

[CmdletBinding()]
Param()

So in the example below you can see that neither function outputs the Write-Verbose message when called without the switch:

[Screenshot: Test-NoVerbose and Test-WithVerbose function definitions]
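In case the screenshot doesn't come through, here is a minimal sketch of what those two functions might look like:

function Test-NoVerbose {
    Write-Verbose "Not shown even with -Verbose, since the function lacks CmdletBinding"
}

function Test-WithVerbose {
    [CmdletBinding()]
    Param()
    Write-Verbose "This message shows only when -Verbose is used"
}

Test-NoVerbose -Verbose      # prints nothing
Test-WithVerbose -Verbose    # VERBOSE: This message shows only when -Verbose is used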

The difference you will see is that Test-NoVerbose does nothing when you include the verbose switch, whereas Test-WithVerbose will output its message:

[Screenshot: output of both functions with the -Verbose switch]

So in cases where other people may be using your scripts, this feature helps keep output clean unless you need it for debugging. I tend to use this most often when I am working on long scripts where I initially want to know what is going on as they run. If I ever have to come back to a script for debugging I can just use the switch, versus normal execution, which doesn't need all that output.

Errors

They are going to happen; it is inevitable in your scripting journey that at some point you will hit an error. You cannot prepare for every error, but you can collect as much information about an error as possible to help in debugging. Just like you would handle errors in T-SQL using a TRY/CATCH block, you can do the same in PowerShell.

PowerShell offers a variable that is available in every session you open or run, called $Error. [The dollar sign in PowerShell denotes a variable.] This variable holds records of the errors that have occurred in your session, including those raised by your scripts. There are other errors or exceptions that can be thrown by .NET objects that work a bit differently in how you capture them; I will refer you to Allen White's post on Handling Errors in PowerShell for a good example.
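A minimal sketch combining the two (the file path is just a placeholder); -ErrorAction Stop turns the non-terminating error into a terminating one so the catch block fires:

try {
    Get-Item 'C:\does\not\exist.txt' -ErrorAction Stop
}
catch {
    "Caught: $($_.Exception.Message)"   # $_ is the current error record
    $Error[0] | Format-List * -Force    # full detail of the most recent error
}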

Summary

Debugging is one of those topics that could fill a 3-day course, so one blog post is obviously not going to cover all the information you might need. I came across a good blog post by the PowerShell Team on Advanced Debugging in PowerShell that should be a read for anyone wanting to get involved with PowerShell scripting.

Categories: DBA Blogs