- Ready or Not: Applying Secure Configuration to Oracle E-Business Suite (26 minutes)
It's a new world - one where secure configuration is no longer optional and you must reduce your attack surface. Eric Bing, Senior Director of Product Development, shares that going forward, many Oracle E-Business Suite security features will be turned on by default. To further assist you with deploying Oracle E-Business Suite securely, we are now providing a Secure Configuration Management console. Under certain conditions, access to Oracle E-Business Suite will be limited until your Applications DBA or System Administrator corrects or acknowledges the errors and warnings in the console. Come to this session to learn about the new Secure Configuration Management console and our guidelines for auditing, monitoring and securing your Oracle E-Business Suite environment and sensitive data. This material was presented at Oracle OpenWorld 2016.
- Partner Webcast - Oracle Storage Cloud Services: Enabling Enterprises Evolve (OPN Innovation and Modernisation Center (EMEA))
- Partner Webcast - Docker Agility in Cloud: Introducing Oracle Contain…
5. Virtual Yoga Business
6. Rehab Marketing Firm
- Company Name
- Full Address (Address, City, State, Zip)
- Phone Number
- Latitude and Longitude
- Ratings and Reviews
- Email Addresses on their website
- Competitor Mentions
- Specific Terms on their website
- Social Media Platforms they are on (Facebook, Twitter, Pinterest, Google Plus, etc)
- Is their website mobile enabled?
- Do they have a "current" (HTML5) website?
- Do they mention your company?
There is SO much information that we can glean from a company's website! Use your imagination and let NorthStar do the work! This client asked us to get them a list of companies specializing in revenue cycle management. That's an easy term to search for, and we added "back office" to our search terms too, since that's sometimes used. The billing category alone gave us just over 300 people, and these search terms are even more effective at narrowing down your prospects!
Real Estate Infomercial
The "Direct Response" (DR) world is an interesting world full of infomercials geared at getting to consumers, typically to buy their goods or services. The real estate world has had a number of people who focused on the DR space - many of them "bad" (i.e. they just wanted your money, not to really help you). A new entre into the DR space approached us about their business. One side is the B2C (business to consumer) side, which infomercials and other solutions like StarStar are excellent at helping. The other side of the business is that they plan to deliver their services through existing Realtors. In fact, the Realtors could be customers. Real estate agencies are registered businesses that we can identify with NorthStar.
Here's what the distribution looks like across the US:
Cannabis Regulatory Compliance Business
One of my good friends is in this business. They help medical and recreational cannabis businesses keep current with their paperwork. We searched 3 different categories (cannabis, alternative clinics, and marijuana) to find these businesses, then searched each site for terms such as cannabis and marijuana. The most important piece of data for them at this point is an email address. They also wanted to know how many shops we could find in the US, but they wanted to start in Colorado. Overall we found 6,447 email addresses; limiting it to Colorado, we came up with only 274 records. They said that calling doesn't work (the workers are too stoned), so an email is critical for them.
Virtual Yoga Business
As many of you know, prior to this venture I was in the subscription and transactional video-on-demand business, and we had a number of clients in the fitness business. Subscription businesses are recurring revenue businesses that keep on giving (i.e. generating revenue) once you get customers signed up. In fact, if you have a $10/mo product and you add 10 new customers a day for a year, your ARR will be over $400k with a low churn rate. That's powerful!
Yoga businesses are typically recurring revenue businesses - people pay month after month. So why not create a subscription Yoga business that you can attend right from home? Why build your own Yoga studio to do this? Why not just put cameras in existing Yoga studios and allow them to post their live video sessions for anyone in the world (who has a subscription) to attend the class?
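To check that math: 10 new customers a day for 365 days is 3,650 subscribers by year end, and 3,650 subscribers × $10/mo × 12 months works out to $438,000 in annual recurring revenue (before churn) - hence "over $400k".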
Rehab Marketing Firm
The cost is too high when selecting records from the staging table JDSU_RPR_ORDER_HEADERS_STG.
When you have a classic report in Oracle Application Express (APEX) and want to make it searchable, you typically add a Text Item in the region, enable Submit on Enter, and add a WHERE clause to your SQL statement.
Here’s an example:
Your SQL statement probably looks like this:
where CUSTOMER_ID = :P4_SEARCH
When you want to search for multiple customers separated by a comma, how do you do that?
So in my search field I add for example: 1,2,3 and expect to see 3 customers.
There are a couple of options; I'll list three below:
where INSTR(','||:P4_SEARCH||',', ',' || CUSTOMER_ID || ',') > 0
where REGEXP_LIKE(CUSTOMER_ID, '^('|| REPLACE(:P4_SEARCH,',','|') ||')$')
where customer_id in (
      select to_number(regexp_substr(:P4_SEARCH, '[^,]+', 1, level))
        from dual
     connect by regexp_substr(:P4_SEARCH, '[^,]+', 1, level) is not null)
Which one to choose? It depends on what you need… If you need readability, you may find INSTR easier to understand. If you need performance, the last option may be the better choice… so, as always, it depends. If you want to measure the performance, you can look at the Explain Plan (just copy the SQL into SQL Workshop and hit the Explain tab).
The Explain Plan for the first SQL looks like this:
The Explain Plan for the last SQL looks like this:
The above technique is also useful when you want to use checkboxes above your report, so people can make a selection. For example, we select the customers we want to see:
The where clause would be identical, but instead of a comma (,) you would use a colon (:), so the first statement would be:
where INSTR(':'||:P4_SEARCH||':', ':' || CUSTOMER_ID || ':') > 0
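One more practical touch: if you also want the report to show all rows when nothing has been entered or selected, you can wrap the condition like this - a sketch, with CUSTOMERS standing in for your own table (swap the comma for a colon in the checkbox case):

select customer_id, customer_name
  from customers
 where :P4_SEARCH is null
    or customer_id in (
      select to_number(regexp_substr(:P4_SEARCH, '[^,]+', 1, level))
        from dual
     connect by regexp_substr(:P4_SEARCH, '[^,]+', 1, level) is not null)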
Happy searching your Classic Report :)
I ran across a Stack Overflow question that gave me an idea for a simpler use of Python to graph some Oracle database performance information. I looked at my PythonDBAGraphs scripts, and I'm not sure it is worth modifying them to simplify them, since I like what they do. But they may make people think that Python scripts to graph Oracle performance data are difficult to write. If someone just wants to put together some graphs using Python, Matplotlib, and cx_Oracle, they could do it more simply than I have in my PythonDBAGraphs scripts, and it still could be useful.
Here is an example that looks at db file sequential read waits and graphs the number of waits per interval and the average wait time in microseconds:
import cx_Oracle
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

connect_string = "MYUSER/MYPASSWORD@MYDATABASE"
con = cx_Oracle.connect(connect_string)
cur = con.cursor()

query="""
select sn.END_INTERVAL_TIME,
(after.total_waits-before.total_waits) "number of waits",
(after.time_waited_micro-before.time_waited_micro)/
(after.total_waits-before.total_waits) "ave microseconds"
from DBA_HIST_SYSTEM_EVENT before, DBA_HIST_SYSTEM_EVENT after,
DBA_HIST_SNAPSHOT sn
where before.event_name='db file sequential read' and
after.event_name=before.event_name and
after.snap_id=before.snap_id+1 and
after.instance_number=1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number and
(after.total_waits-before.total_waits) > 0
order by after.snap_id
"""

cur.execute(query)

# Load the three columns of each row into parallel lists
datetimes = []
numwaits = []
avgmicros = []
for result in cur:
    datetimes.append(result[0])
    numwaits.append(result[1])
    avgmicros.append(result[2])

cur.close()
con.close()

title="db file sequential read waits"

fig = plt.figure(title)
ax = plt.axes()

plt.plot(datetimes,numwaits,'r')
plt.plot(datetimes,avgmicros,'b')

# Format X axis dates
fig.autofmt_xdate()
ax.fmt_xdata = mdates.DateFormatter('%m/%d/%Y %H:%M')
datetimefmt = mdates.DateFormatter("%m/%d/%Y")
ax.xaxis.set_major_formatter(datetimefmt)

# Title and axes labels
plt.title(title)
plt.xlabel("Date and time")
plt.ylabel("num waits and average wait time")

# Legend
plt.legend(["Number of waits","Average wait time in microseconds"],
           loc='upper left')

plt.show()
The graph it produces is usable without a lot of time spent formatting it in a non-standard way:
It is a short 68-line script, and you just need matplotlib and cx_Oracle to run it. I've tested this with Python 2.
In my previous blog post, I talked about SQL Server on Linux and high availability. During my tests, I used an NFS server to share disk resources between my cluster nodes, as described in the Microsoft documentation. A couple of days ago, I decided to add a fourth node (LINUX04) to my cluster infrastructure and expected this to be easy work. No such luck: I faced a problem I had never had before on this infrastructure.
Switching over to this last node led to a failed SQL Server FCI resource. After digging into the problem, I found the root cause in the SQL Server error log, as shown below:
[mikedavem@linux04 ~]$ sudo cat /var/opt/mssql/log/errorlog
2017-02-12 18:55:15.89 Server      Microsoft SQL Server vNext (CTP1.2) - 18.104.22.168 (X64)
        Jan 10 2017 19:15:28
        Copyright (C) 2016 Microsoft Corporation. All rights reserved.
        on Linux (CentOS Linux 7 (Core))
2017-02-12 18:55:15.89 Server      UTC adjustment: 0:00
2017-02-12 18:55:15.89 Server      (c) Microsoft Corporation.
2017-02-12 18:55:15.89 Server      All rights reserved.
2017-02-12 18:55:15.89 Server      Server process ID is 4116.
2017-02-12 18:55:15.89 Server      Logging SQL Server messages in file 'C:\var\opt\mssql\log\errorlog'.
2017-02-12 18:55:15.89 Server      Registry startup parameters:
         -d C:\var\opt\mssql\data\master.mdf
         -l C:\var\opt\mssql\data\mastlog.ldf
         -e C:\var\opt\mssql\log\errorlog
2017-02-12 18:55:15.91 Server      Error: 17113, Severity: 16, State: 1.
2017-02-12 18:55:15.91 Server      Error 2(The system cannot find the file specified.) occurred while opening file 'C:\var\opt\mssql\data\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
2017-02-12 18:55:15.91 Server      SQL Server shutdown has been initiated
Well, the error speaks for itself: it seems I'm facing a file access permission issue. My first reflex was to check the permissions on the corresponding NFS folder.
[mikedavem@linux04 ~]$ sudo ls -lu /var/opt/mssql/data
total 53320
drwxr-----. 2 995 993     4096 Feb 14 23:12 lost+found
-rwxr-----. 1 995 993  4194304 Feb 14 23:19 master.mdf
-rwxr-----. 1 995 993  2097152 Feb 14 23:19 mastlog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 modellog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 model.mdf
-rwxr-----. 1 995 993 13959168 Feb 14 23:19 msdbdata.mdf
-rwxr-----. 1 995 993   786432 Feb 14 23:19 msdblog.ldf
drwxr-----. 2 995 993     4096 Feb 14 23:08 sqllinuxfci
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 tempdb.mdf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 templog.ldf
According to the output above, this looks like a mismatch between the uids/gids of the mssql user across the cluster nodes. At this stage, I remembered performing some tests that included creating some Linux users before adding my fourth node, which created the mismatch for the mssql user's uid/gid. Keep in mind that the SQL Server installation creates an mssql user by default with the next available uid/gid - in my case uid 997 and gid 995.
Let's compare the mssql user's uid/gid with those on the other existing nodes LINUX01, LINUX02 and LINUX03:
[mikedavem@linux04 ~]$ id mssql
uid=997(mssql) gid=995(mssql) groups=995(mssql)
…
[root@linux04 ~]# ssh linux01 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
…
[root@linux04 ~]# ssh linux02 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
…
[root@linux04 ~]# ssh linux03 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
OK, this explains why I faced this permission issue. After investing some time in figuring out how to get rid of it without changing the mssql user's uid/gid, I read some discussions about using NFSv4, which is intended to fix this uids/gids mapping issue. It seemed perfect for my case! But first, let's confirm which NFS version I'm using:
[mikedavem@linux04 ~]$ mount -v | grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.5.14:/mnt/sql_data_nfs on /var/opt/mssql/data type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.5.14,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.5.14)
Well, my current configuration (vers=3) is not ready to leverage NFSv4 yet, and some configuration changes are required to address that.
Firstly, let's change the fstype parameter of my FS resource to nfs4, so the NFS share is mounted with NFSv4.
[mikedavem@linux04 ~]$ sudo pcs resource show FS
 Resource: FS (class=ocf provider=heartbeat type=Filesystem)
  Attributes: device=192.168.5.14:/mnt/sql_data_nfs directory=/var/opt/mssql/data fstype=nfs
  Operations: start interval=0s timeout=60 (FS-start-interval-0s)
              stop interval=0s timeout=60 (FS-stop-interval-0s)
              monitor interval=20 timeout=40 (FS-monitor-interval-20)

[mikedavem@linux04 ~]$ sudo pcs resource update FS fstype=nfs4
[mikedavem@linux04 ~]$ sudo pcs resource restart FS
FS successfully restarted
Then I had to update the idmap configuration on both sides (NFS server and client) to make the mapping work correctly. The main steps were as follows:
- Enabling idmap with NFS4 (disabled by default in my case)
- Changing some parameters inside /etc/idmapd.conf
- Verifying idmap is running correctly.
[root@nfs sql_data_nfs]# echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping
…
[root@nfs sql_data_nfs]# grep ^[^#\;] /etc/idmapd.conf
[General]
Domain = dbi-services.test
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
[Translation]
Method = static
[Static]
firstname.lastname@example.org = mssql
email@example.com = testp
…
[root@nfs sql_data_nfs]# systemctl status nfs-idmap
. nfs-idmapd.service - NFSv4 ID-name mapping service
   Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2017-02-15 20:29:57 CET; 1h 39min ago
  Process: 3362 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS (code=exited, status=0/SUCCESS)
 Main PID: 3364 (rpc.idmapd)
   CGroup: /system.slice/nfs-idmapd.service
           └─3364 /usr/sbin/rpc.idmapd
At this point, listing user permissions shows nobody/nobody, meaning the translation is not being performed yet.
[root@linux04 ~]# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 nobody nobody 4096 Feb 15 19:00 data
…
I had forgotten to create a corresponding mssql user on the NFS server side. Let's do it:
[root@nfs sql_data_nfs]# groupadd mssql -g 993
[root@nfs sql_data_nfs]# useradd -u 995 -g 993 -M mssql
After remounting the NFS share, I finally got the expected output as shown below:
[root@linux04 ~]# mount -o remount -t nfs4 192.168.5.14:/mnt/sql_data_nfs/sqllinuxfci /mnt/testp/
…
[root@linux04 ~]# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 mssql mssql 4096 Feb 15 19:00 data
…
[root@linux04 ~]# ls -lu /var/opt/mssql/data/*
-rwxr-----. 1 mssql mssql  4194304 Feb 15 19:53 /var/opt/mssql/data/master.mdf
-rwxr-----. 1 mssql mssql  2097152 Feb 15 19:53 /var/opt/mssql/data/mastlog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/modellog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/model.mdf
-rwxr-----. 1 mssql mssql 13959168 Feb 15 19:53 /var/opt/mssql/data/msdbdata.mdf
-rwxr-----. 1 mssql mssql   786432 Feb 15 19:53 /var/opt/mssql/data/msdblog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/tempdb.mdf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/templog.ldf
This time the translation is effective, but let's perform another test by running the previous command as the mssql user:
[root@linux04 ~]# runuser -l mssql -c 'ls -lu /var/opt/mssql/data/*'
ls: cannot access /var/opt/mssql/data/*: Permission denied
The problem reappears as soon as I try to access the database files, despite the correct mapping configuration. I spent some time learning that there are some common misconceptions about NFSv4 magically fixing mismatched uids/gids. I admit the main documentation is not clear about it, but please feel free to comment if that is not the case. After digging into further pointers, I understood that NFS itself doesn't perform authentication but delegates it down to the RPC mechanism. And RPC security hasn't been updated to support such mapping: it still uses the historic authentication flavor called AUTH_SYS, meaning raw uids/gids are sent over the network; the idmap translation only applies to names, later. The only way to get rid of this issue would be to use another flavor like RPCSEC_GSS, which includes authentication based on LDAP or Kerberos, for example.
The bottom line is that SQL Server on Linux is no exception here. As long as we stick with basic Unix authentication, keeping uids and gids synchronized across the cluster nodes seems to be the way to go. Using Kerberos authentication instead? That's another story that I will try to tell in another blog post!
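For instance, here is a minimal sketch of that synchronization - creating the mssql group and user with fixed ids on a new node before installing SQL Server (995/993 are the values my existing nodes use; adapt them to yours):

# Run on each new cluster node *before* installing SQL Server,
# so the mssql account gets the same uid/gid everywhere
sudo groupadd -g 993 mssql
sudo useradd -u 995 -g 993 -M mssql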
Happy clustering on Linux!
Is this the sort of column that gets set to a not-null value as a flag that some task needs to be run against the row? The task would then set the column to null to mark the row as processed. If you had a big batch to run, or the task hadn't run in some time (looking at your 41K buffers, that must be quite a lot of rows!), then the index would have grown. Setting the column back to null isn't going to shrink it again on its own, but it will mean the index can shrink if asked to.
The same sort of thing happens with AQ tables, except the rows get deleted (although that can happen some time after processing).
I find it interesting that BI Publisher is mostly known for the creation of pixel-perfect repeating forms (invoices, labels, checks, etc.) and its ability to burst them. To me, BI Publisher is the best kept secret for the most challenging reports known to mankind.
In my last blog - https://www.rittmanmead.com/blog/2017/02/financial-reports-which-tool-to-use-part-1/, I discussed some of the challenges of getting precisely formatted financial reports in OBIEE, as well as some pros and cons of using Essbase/HFR. Although we can work through difficult solutions and sometimes get the job done, BI Publisher is the tool that easily allows you to handle the strangest requirements out there!
If you have OBIEE, then you already have BI Publisher, so there is no need to purchase another tool. BI Publisher comes integrated with OBIEE, and they can both be used from the same interface. The transition between BI Publisher and OBIEE is often seamless to the user, so you don’t need to have concerns over training report consumers in another tool, or even transitioning to another url.
The BIP version embedded with OBIEE 12c comes loaded with many more useful features, like encryption and delivering documents to Oracle Document Cloud Service. Check out the detailed new features here: http://www.oracle.com/technetwork/middleware/bi-publisher/new-features-guide-for-12-2-1-1-3074557.pdf
In BI Publisher, you can leverage data from flat files, from different databases, from an Essbase cube, from the OBIEE RPD, from one (or multiple) OBIEE analyses, from web services and more:
So, if you already have very complex OBIEE analyses that you could not format properly, you can use these analyses, and all the logic in them, as sources for your perfectly formatted BI Publisher reports.
Every BI Publisher report consists of three main components:
- Data Model - the data source(s) that you will use across one or more reports
- Layout(s) - which define how your data is presented
- Properties - which control how the report generates, displays and more
You start a BI Publisher project by creating a data model that contains the different data sets that you would like to use on your report (or across multiple reports). These data sets, which reside inside of your data model, can be of the same source or can come from multiple sources and formats. If you regularly use OBIEE, you can think of a data model as the metadata for one or more reports. It is like a very small, but extremely flexible and powerful RPD.
Inside the data model you can connect your data sets using bind variables (which creates a hierarchical relationship between data sets), or you can leave them completely disconnected. You can also connect some of your data sets while leaving others disconnected.
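As a quick sketch of the bind variable approach (using the classic DEPT/EMP sample tables as stand-ins for your own data), a parent data set and a child data set linked through a bind variable become one hierarchical result:

-- Parent data set (G_DEPT)
select deptno, dname from dept

-- Child data set (G_EMP), linked to the parent via the :deptno bind variable
select empno, ename, sal
  from emp
 where deptno = :deptno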
The most impressive component of this tool is that it allows you to do math on the results of disconnected data sets, without requiring ETL behind the scenes. This may be one of the requirements of a very complex financial report, and one that is very difficult to accomplish with most tools. The data model can extract and transform data within a data set, or extract only, so that the data can later be transformed during your report template design!
For example, within a data set, you can create new columns to suit most requirements - they can be filtered, concatenated, or have mathematical functions applied to them, if they come from the same data source.
If they do not come from the same source, you can transform your data using middle-tier systems, such as Microsoft Word, during your template creation. You can apply math and other functions to any result that comes from any of your data sets using an RTF template, for example.
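For instance, a single cell in an RTF template can combine two disconnected data sets. This is just an illustrative sketch, where G_ACTUALS and G_BUDGET stand in for your own data set names:

<?sum(//G_ACTUALS/ROW/AMOUNT) - sum(//G_BUDGET/ROW/AMOUNT)?>

Because all data sets end up in a single XML document, an absolute XPath expression lets one cell reach into any of them.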
The example shown in Part 1 of this blog was created using BI Publisher and represents what I would call a "challenging report" to get done in OBIEE. The data model in that example consisted of several OBIEE analyses, and their results were added/subtracted/multiplied as needed in each cell.
This second example was another easy transition into BI Publisher: the entire report contained 10 pages that were formatted entirely differently, one from the other. Totals from all pages needed to be added in some specific cells. Better yet, the user entered some measures at the prompt, and these measures needed to be accounted for in every sub-total and grand total. You may be asking: why prompt for a measure? Very good question indeed. In this case, there were very few measures coming from a disconnected system. They changed daily, and the preferred way for my client to deal with them was to enter them at the prompt.
So, do you always have to add apples to apples? Not necessarily! Adding apples and bananas may be meaningful to you.
And you can add what is meaningful with BI Publisher!
For example, here is a sample data model using sources from Excel, OBIEE and a database. As you see, two of these data sets have been joined, while the other two are disconnected:
A data model such as this one allows you to issue simultaneous queries across these heterogeneous sources and combine their results in the report template. Meaning, you can add anything you would like in a single cell - even that measure coming from the prompt! It goes without saying that you should have a clear purpose and logic behind this machination.
Once your data model is complete - your data sets are in place, you have created the relationships between them (where applicable), created custom columns, and defined your parameters and filters - you generate some sample data (XML) and choose how you will create your actual report.
As I mentioned, there are additional functionalities that may be added when creating the report, depending on the format that you choose for your template:
One very simple option is to choose the online editor, which has somewhat limited formatting capabilities but allows you to interact with your results online.
In my experience, if I had to cross the bridge away from OBIEE and into BI Publisher, it is because I needed to do a lot of customization within my templates. For those customizations, I found that working with RTF templates gave me all the additional power that I could possibly be missing everywhere else. Even when my financial report had to be read by a machine, BI Publisher/RTF was able to handle it.
The power of the BI Publisher data model, combined with the unlimited flexibility of the RTF templates, was finally the answer to eliminating the worst Excel monsters. With these two, you can recreate the most complex reports, and do it just ONCE - not every month. You can use your existing format - one that you either love, or are forced to use for some reason - and reuse it within the RTF. Inside each RTF cell, you define (once!) what that cell is supposed to be. That specific cell, and all others, will be tested and validated to produce accurate results every month.
Once this work is done, you are done forever. Or well, at least until the requirements change… So, if you are battling with any one of these monsters on a monthly basis, I highly encourage you to take a step forward and give BI Publisher a try. Once you are done with the development of your new report, you may find that you have hours per month back in your hands. Over time, many more hours than what you spent to create the report. Time worth spending.
Some months ago, I wrote several blog posts about the new Container feature of Windows Server 2016. Here is the list:
Today, I will install the Container feature, install Docker and deploy a container.
First of all, I need to enable the Containers and Hyper-V features. Be careful if you use VirtualBox, because once Hyper-V is enabled, VirtualBox VMs won't work anymore.
Windows Server 2016 supports both Windows Server containers and Hyper-V containers; I will use Hyper-V containers here. So I will check whether those features are already enabled on my server and, if not, enable them. Don't forget to map the Windows Server 2016 ISO file to your virtual machine.
To do it just run this PowerShell cmdlet:
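(A sketch of the standard approach; note that -Restart reboots the server to finish the Hyper-V installation.)

# Enable both features and reboot to complete the Hyper-V install
Install-WindowsFeature -Name Containers, Hyper-V -IncludeManagementTools -Restart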
Now, both features are installed in my VM and I can install Docker.
To do so, I download the Docker Engine and Client from the Docker project library to the folder C:\Temp and unzip the file into C:\Program Files:
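(A sketch; the exact download URL is an assumption based on the Docker builds library of the time.)

# Download the Docker zip and extract it; the archive contains a "docker" folder
Invoke-WebRequest -Uri "https://get.docker.com/builds/Windows/x86_64/docker-latest.zip" -OutFile "C:\Temp\docker.zip"
Expand-Archive -Path "C:\Temp\docker.zip" -DestinationPath "C:\Program Files"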
I now have my docker folder with the Docker executables and binaries: dockerd.exe for the Docker engine and docker.exe for the client:
I also add the Docker path to the Path environment variable:
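(For the current PowerShell session only:)

$env:Path += ";C:\Program Files\docker"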
Optionally, we could make it permanent:
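(Persisted at machine level; new sessions will pick it up.)

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\docker", "Machine")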
I will install Docker as a Service and start it:
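(dockerd.exe can register itself as a Windows service:)

& "C:\Program Files\docker\dockerd.exe" --register-service
Start-Service docker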
Docker is installed and started, and I'm now able to use it. I don't have any images for the moment, but the microsoft/nanoserver image is available on the Docker Hub:
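(A sketch - checking local images and searching the Hub:)

docker images
docker search microsoft/nanoserver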
Let’s download this Nano Server base OS image from the Hub:
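docker pull microsoft/nanoserver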
Now that I have my Docker OS image, I will start an interactive session with this image:
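(cmd is the entry point; the --isolation=hyperv flag is my assumption here, to get a Hyper-V container rather than a process-isolated one:)

docker run -it --isolation=hyperv microsoft/nanoserver cmd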
The container starts and I get access to the command prompt, where I can check the processes running in my container, like PowerShell, cmd…:
I will create a PowerShell script in my container to write a Welcome Container message and exit from my container:
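(A sketch from inside the container's cmd prompt; the script name C:\myscript.ps1 is just an example:)

echo Write-Host "Welcome Container" > C:\myscript.ps1
exit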
I can now see my new container with the following command:
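(The -a flag also lists stopped containers:)

docker ps -a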
I will now create a new image from my container's changes, named mywelcomecontainer (it's not possible to use upper case for the new image name, otherwise you will receive the error "repository name must be lowercase"):
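(<containerid> comes from the docker ps -a output above:)

docker commit <containerid> mywelcomecontainer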
I can finally run my container: a Hyper-V container will be created from my new image mywelcomecontainer, and my PowerShell script will be executed inside it:
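(Again assuming the example script path from earlier:)

docker run --isolation=hyperv mywelcomecontainer powershell.exe C:\myscript.ps1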
As I omitted the docker run option --rm, I still have my container available when I run docker ps -a. To delete my container, I can run docker rm <containerid>; docker rmi will delete images if needed.
To conclude, it is very easy to create images and Hyper-V containers with Docker in Windows Server 2016. The power of Docker is now available in the Windows world, and it will surely be used more and more.