
Registering vCenter Server Appliance 6.0 (VCSA) with Datadog


I tried registering a vCenter Server Appliance 6.0 (VCSA) with Datadog.

Datadog is a cloud service from Datadog, Inc. for monitoring performance metrics and events.

It also appears to support monitoring VMware vSphere.

Datadog-VMware Integration (English)

https://www.datadoghq.com/blog/unified-vsphere-app-monitoring-datadog/ (English)

About this environment

This walkthrough uses VCSA 6.0 U3.

Command> system.version.get

Version:

   Product: VMware vCenter Server Appliance

   Installtime: 2017-04-02T04:55:33 UTC

   Summary: VMware vCenter Server Appliance 6.0 Update 3

   Releasedate: February 23, 2017

   Version: 6.0.0.30000

   Build: 5112509

   Type: vCenter Server with an embedded Platform Services Controller

 

VCSA 6.0 is based on SUSE Linux.

Command> shell.set --enabled True

Command> shell

    ---------- !!!! WARNING WARNING WARNING !!!! ----------

Your use of "pi shell" has been logged!

The "pi shell" is intended for advanced troubleshooting operations and while

supported in this release, is a deprecated interface, and may be removed in a

future version of the product.  For alternative commands, exit the "pi shell"

and run the "help" command.

The "pi shell" command launches a root bash shell.  Commands within the shell

are not audited, and improper use of this command can severely harm the

system.

Help us improve the product!  If your scenario requires "pi shell," please

submit a Service Request, or post your scenario to the

https://communities.vmware.com/community/vmtn/vcenter/vc forum and add

"appliance" tag.

 

vc02:~ # uname -n

vc02.go-lab.jp

vc02:~ # cat /etc/SuSE-release

SUSE Linux Enterprise Server 11 (x86_64)

VERSION = 11

PATCHLEVEL = 3

 

Note that this VCSA is in my home lab and can reach the Internet directly.

Installing the Datadog Agent

The installation runs a script fetched with curl, with the DD_API_KEY environment variable set. The command line and DD_API_KEY value are shown, for example, when you sign up for the Datadog free trial.

vc02:~ # DD_API_KEY=182b0~ bash -c "$(curl -L https://raw.githubusercontent.com/DataDog/dd-agent/master/packaging/datadog-agent/source/install_agent.sh)"

 

The contents of the script can be viewed here:

dd-agent/install_agent.sh at master · DataDog/dd-agent · GitHub

 

The Datadog Agent RPM is now installed:

vc02:~ # rpm -qa | grep datadog

datadog-agent-5.13.2-1

 

The agent is started by default:

vc02:~ # chkconfig --list datadog-agent

datadog-agent             0:off  1:off  2:on   3:on   4:on   5:on   6:off

vc02:~ # service datadog-agent status

Datadog Agent (supervisor) is running all child processes

 

Creating a user for the Datadog integration

For the Datadog integration, I created a user named datadog-user@vsphere.local as a local user in the Platform Services Controller (PSC) embedded in the vCenter.

[Screenshot: dd-01.png]

The created user is granted vCenter's "Read-only" role.

[Screenshot: dd-02.png]

 

Configuring the vSphere integration

Create the configuration file from the /etc/dd-agent/conf.d/vsphere.yaml.example file included with the Datadog Agent.

This time I configured it as follows:

vc02:~ # vi /etc/dd-agent/conf.d/vsphere.yaml

vc02:~ # chown dd-agent:dd-agent /etc/dd-agent/conf.d/vsphere.yaml

vc02:~ # grep -E -v "#|^$" /etc/dd-agent/conf.d/vsphere.yaml

init_config:

instances:

  - name: vc02-home

    host: vc02.go-lab.jp

    username: 'datadog-user@vsphere.local'

    password: '<password>'

    ssl_verify: false

 

Restart the Datadog Agent:

vc02:~ # service datadog-agent restart

* Stopping Datadog Agent (stopping supervisord) datadog-agent            [ OK ]

* Starting Datadog Agent (using supervisord) datadog-agent               [ OK ]
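If you manage several vCenters, the instances list in vsphere.yaml can be generated rather than hand-edited. A minimal sketch of that (the second vCenter name and host are hypothetical; in practice the password would come from a vault, not a literal):

```python
# Render a Datadog Agent v5 vsphere.yaml from a list of vCenters.
vcenters = [
    {"name": "vc02-home", "host": "vc02.go-lab.jp"},
    {"name": "vc03-home", "host": "vc03.go-lab.jp"},  # hypothetical second vCenter
]

def render_vsphere_yaml(instances, username, password):
    """Build the YAML text in the same shape as the hand-written file above."""
    lines = ["init_config:", "", "instances:"]
    for inst in instances:
        lines += [
            "  - name: %s" % inst["name"],
            "    host: %s" % inst["host"],
            "    username: '%s'" % username,
            "    password: '%s'" % password,
            "    ssl_verify: false",
        ]
    return "\n".join(lines) + "\n"

config = render_vsphere_yaml(vcenters, "datadog-user@vsphere.local", "<password>")
print(config)
```

The output can then be written to /etc/dd-agent/conf.d/vsphere.yaml and chowned to dd-agent, as in the manual steps above.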

 

vSphere data is now forwarded to Datadog. In the Datadog web UI (the screenshot below shows Infrastructure → Infrastructure List), you can see that the VCSA (outlined in red) and the ESXi hosts and VMs managed by that vCenter have been discovered automatically. Only powered-on VMs appear to be listed.

[Screenshot: dd-03.png]

 

That's how to register VCSA 6.0 with Datadog. To be continued, maybe...


How to configure DRS

Script to copy App Volumes AppStacks to another datastore


Like most IT guys, I like to reduce the amount of repetitive work where possible.
At a customer we built a VDI/Workspace ONE staging environment where VDI/RDS templates are built, tested and updated. All VMware App Volumes AppStacks are also created on this platform.
When the AppStacks are ready for production, you want to be able to copy them to the production storage for App Volumes. For this there is no proper solution: either create those AppStacks again, or transport them with the fling and Veeam. Both of these solutions are time-consuming, and try explaining them to the support staff.

To make life easier, I wrote a PowerShell script to properly copy App Volumes AppStacks and their metadata from the source datastore to any (shared) datastore.
Yes, this includes from vSAN, where you previously needed the fling and Veeam to get your AppStacks to your production environment.
Manually copying these files via SSH is an option, but time-consuming. Copying the AppStacks from vSAN via the datastore browser will even leave you with CrapStacks (broken AppStacks, because a .flat file and some vSAN headers go missing).

The template script I wrote can be downloaded below and requires VMware PowerCLI.
Changelog:
#v1 Added metadata copy option (Download)
#v2 merged metadata copy with AppStack copy. (Download)

#1. Change the vCenter address and account. (Can be multiple vCenters, but this is untested.)
#2. Change the paths to your source and target datastores. (Source staging App Volumes datastore and target production datastore.)
#3. No support; know what you are doing and change whatever you like to suit your needs.

Current options built in:

“Press ‘1’ To Connect to Staging vCenter.” (Connect to source vCenter)
“Press ‘2’ To List AppStacks Staging.” (List current AppStacks on source datastore)
“Press ‘3’ To List AppStacks Production.” (List current AppStacks on destination datastore)
“Press ‘4’ To Copy AppStacks to Production.” (Copy AppStacks and matching metadata to destination datastore)
“Press ‘5’ To Copy AppStack Metadata to Production.” (Copy Metadata to destination datastore)
“Press ‘6’ To Backup AppStacks.” (Copy all AppStacks to a backup datastore; be aware that no metadata is copied along. Working on this.)
“Press ‘Q’ To Quit.”

After copying your AppStacks and metadata to the target datastore, import them in your production App Volumes Manager.
Presto, your AppStacks are now available in your production environment with the corresponding metadata! No legacy status.
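At its core, the copy logic has to pair each AppStack disk with its matching metadata file by base name before moving anything. A rough sketch of that pairing step (the file names are hypothetical; the actual script does the copying with PowerCLI):

```python
import os

def pair_appstacks(filenames):
    """Group AppStack .vmdk files with their matching .metadata files by base name."""
    base = {}
    for name in filenames:
        stem, ext = os.path.splitext(name)
        base.setdefault(stem, []).append(ext)
    # An AppStack is only complete when both the disk and its metadata are present.
    return {stem: exts for stem, exts in base.items()
            if ".vmdk" in exts and ".metadata" in exts}

files = ["Chrome.vmdk", "Chrome.metadata", "Office.vmdk"]  # hypothetical listing
complete = pair_appstacks(files)
print(sorted(complete))  # only the stack that has both parts
```

Copying only complete pairs is what avoids ending up with AppStacks that App Volumes Manager cannot import.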

Source: Script to copy App Volumes AppStacks to another datastore - vDrone

SRM 6.5 Resources


Start Planning Your Upgrade to vSAN 6.6 Today!



vSAN 6.6, the sixth generation of the product, was recently released and is the biggest update yet. This release includes 20+ new features and various performance enhancements. Please keep in mind that VMware does not support upgrading from vSphere 6.0 U3 to vSphere 6.5, per the vSphere upgrade policy. Please see VMware KB https://kb.vmware.com/kb/2149840.

All this means that VMware does not support “back in time” upgrades (vSphere 6.5.0 was released 5 months before vSphere 6.0 U3). Please review vSphere upgrade matrix http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php#upgrade.

 

The table below describes the supported upgrade paths for VMware vSAN enabled hosts:

Source vSAN Version | Source vSphere Release | Target vSAN Version | Target vSphere Release | Notes
5.5 | vSphere ESXi 5.5 with patch ESXi550-201504001 | 6.6 | vSphere 6.5 with patch ESXi650-201704001 | ESXi 6.5.0d Build 5310538
6.0 | vSphere 6.0 GA | 6.6 | vSphere 6.5 with patch ESXi650-201704001 | ESXi 6.5.0d Build 5310538
6.1 | vSphere 6.0 Update 1 | 6.6 | vSphere 6.5 with patch ESXi650-201704001 | ESXi 6.5.0d Build 5310538
6.2 | vSphere 6.0 Update 2 | 6.6 | vSphere 6.5 with patch ESXi650-201704001 | ESXi 6.5.0d Build 5310538
6.2 | vSphere 6.0 Update 3 | n/a | n/a | This is NOT a supported upgrade path to vSAN 6.6. Please review VMware KB https://kb.vmware.com/kb/2149840
6.5 | vSphere 6.5 GA | 6.6 | vSphere 6.5 with patch ESXi650-201704001 | ESXi 6.5.0d Build 5310538

Introducing StratusMark


One of the things that working in performance enables is the ability to play with a wide variety of environments, ranging from vSphere to public cloud providers.  Unsurprisingly, making comparisons across these dissimilar environments has historically been very difficult.  Some time back I started crafting a benchmark harness that would allow these kinds of comparisons to be more easily made.  After a bit of tinkering, a benchmark named StratusMark was introduced within VMware.  Since then we’ve used it for several internal case studies involving environment comparisons.  After talking with a few colleagues recently about how the benchmark might be more broadly used, I thought now might be a great time to provide some context around StratusMark, our usage of it, and some of the data we hope to collect in the future.

 

StratusMark is a Java-based application.  It is capable of provisioning a wide variety of workloads across a wide variety of environments. 

 

StratusMark Environment Options:

  • vSphere
  • vCloud Director
  • vRealize Automation
  • Amazon AWS
  • Microsoft Azure
  • Google Compute
  • IBM Softlayer
  • OVH (formerly vCloud Air)

 

It also has several modes of operation, allowing the deployment of various types of workloads.

 

StratusMark Operating Modes:

  • Lifecycle Measurements
  • Microbenchmarking
  • Macrobenchmarking

 

For lifecycle measurements, it captures the time from a resource request, through steady state, to termination of the resource. Microbenchmarking generally utilizes existing templates (or images) already available in an environment. This mode allows smaller benchmarks to be transferred as part of an instance's payload. The payload is user-definable; it can be as simple as an "ls" command to verify an instance is truly ready for work, or as complex as SPECjbb. Macrobenchmarking leverages user-modified templates to introduce instance-to-instance dependencies. For example, when running DVD Store 3 there are instance dependencies between the web tiers and the database.
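The lifecycle mode boils down to timestamping phase transitions for each resource and reporting the elapsed time between them. A minimal sketch of that bookkeeping (the class and phase names are hypothetical, not StratusMark's actual API):

```python
import time

class LifecycleTimer:
    """Record the elapsed time of each phase of a resource's lifecycle."""
    def __init__(self):
        self.marks = []

    def mark(self, phase):
        # monotonic() is immune to wall-clock adjustments during long runs.
        self.marks.append((phase, time.monotonic()))

    def durations(self):
        # Elapsed seconds between consecutive phase marks, keyed by the phase entered.
        return {self.marks[i][0]: self.marks[i + 1][1] - self.marks[i][1]
                for i in range(len(self.marks) - 1)}

t = LifecycleTimer()
t.mark("requested")   # resource request issued
t.mark("steady")      # instance reached steady state
t.mark("terminated")  # termination completed
print(t.durations())
```

The same record can be kept per instance, which is what makes cross-environment comparisons of provisioning latency possible.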

 

As seen from the above environment and operating lists, StratusMark can deploy and compare a diverse set of workloads in interesting ways. Stay tuned for additional posts highlighting some of these comparisons.

Learn vROps Like an Expert

How to study for VMware NSX and the VCP-NV / VCIX-NV


Here are a few tips on VMware NSX and how to study for its certification exams.

 

First, regarding NSX certifications: separate from the vSphere (DCV) track there is an NV (Network Virtualization) track which, like the DCV track, has VCA / VCP / VCIX (awarded for passing the VCAP exam) / VCDX.

Although the track says NSX, the exams cover only NSX for vSphere (NSX-v); at present, the NSX certifications do not cover NSX for Multi-Hypervisor at all.

 

For the specific certification structure and exam blueprints, please refer to the VMware Education site, since these change from time to time and specific exam content cannot be discussed here.

VMware NSX Training and Certification

 

As for studying for the NSX exams, the orthodox approach is to take VMware's authorized training courses where possible, for example:

VMware NSX: Install, Configure, Manage [V6.2]

 

However, cost or class schedules can make attending difficult (for what it's worth, I have not been able to take an NSX training course either), so here are some alternatives.

 

I actually earned my NSX certifications (VCP-NV / VCIX-NV) back in 2014 (VMware Certified Implementation Expert 6 – Network Virtualization - Acclaim), so my know-how may be a little dated, but the content introduced below appears to be kept up to date.
 

1. Acquiring knowledge

Fundamentals

For the NSX fundamentals, I think reading the manuals (the product documentation) is best. Select the target NSX version (6.2, 6.3, ...) and then browse the documentation.

VMware NSX for vSphere documentation

 

The core NSX manuals are published in Japanese, and compared with previous VMware product documentation they contain noticeably more diagrams and screenshots. English books and an official VMware Japanese technical book (from around November 2014) have also been published, but honestly, I personally find the manuals easier to understand.

 

Design / procedures

For studying NSX design and concrete operations, the documents on the VMTN VMware NSX forum (the "Documents" tab) are a good resource.

I recommend the following two first, though note that the NSX forum documents and discussions are English-only. For the VCP-NV in particular, the "VMware NSX for vSphere Network Virtualization Design Guide" is helpful.

VMware® NSX for vSphere Network Virtualization Design Guide ver 3.0

NSX-v Operations Guide, rev 1.5

 

 

2. Hands-on practice

Using VMware Hands-on Labs

Setting up a real NSX environment is a fairly high hurdle. With Hands-on Labs (HoL), you can easily use live lab environments.

VMware Hands-on Labs

This page is part of VMTN, but to use HoL you create a new account separate from your VMTN account.

 

HoL walks you through the scenarios in the lab manual, so you can try the full range of features on live systems. The lab manual scenarios are also useful preparation for the VCIX practical exam.

 

The page below collects the major labs that have been localized into Japanese. For NSX, I suggest starting with the labs whose names begin with HOL-1703.

VMware Hands-on Labs

 

HoL is free, and you can take the same lab any number of times; when I earned the VCIX-NV, I ran HoL-1403 more than 20 times. You can also perform operations outside the lab manual scenario, and with some ingenuity you can even try the NSX API. (At the time, the basic NSX lab was HoL-1403 rather than HoL-1703. Note that the environment is reset each time you retake a lab.)

 

However, HoL basically only lets you try the features of NSX itself; features that involve the physical network cannot be tested.

 

Using an evaluation license

Previously, even if you wanted to build a real NSX environment in a test or home lab, NSX (unlike vSphere and other products) had no free evaluation license available to the public.

 

Recently, however, NSX evaluation licenses have become available through the (paid) VMUG Advantage service.

https://www.vmug.com/Join//EVALExperience

(I have not used it myself, since I use the evaluation licenses provided to vExperts.)

 

 

NSX is still a relatively new technology and many people are learning it by trial and error, so I hope some of this proves useful.

That's it for how to study NSX.


Setting up a simple backup server (SFTP) for NSX-v on Linux


With NSX for vSphere (NSX-v), you can specify an FTP or SFTP server as the destination for NSX Manager data backups.

Back Up NSX Manager Data

This post shows how to set up a simple backup destination for NSX Manager in a test environment or home lab.

 

On common Linux distributions such as Red Hat Enterprise Linux and CentOS, an SSH server is set up by default, and I think the SFTP server included with that SSH server makes a convenient backup target.

The Linux host used as the backup destination

This time, Oracle Linux 7.3 is used.

[root@nsx-work ~]# cat /etc/oracle-release

Oracle Linux Server release 7.3

 

Even with a Minimal Install of the OS, openssh-server is installed by default and the service is running:

[root@nsx-work ~]# rpm -q openssh-server

openssh-server-6.6.1p1-35.el7_3.x86_64

[root@nsx-work ~]# systemctl is-enabled sshd

enabled

[root@nsx-work ~]# systemctl is-active sshd

active

 

The sftp-server binary is also included, and OS users can connect over SFTP by default:

[root@nsx-work ~]# rpm -ql openssh-server | grep sftp

/usr/libexec/openssh/sftp-server

/usr/share/man/man8/sftp-server.8.gz

 

Preparation on the backup SFTP server

The backup destination Linux host can serve as an SFTP server simply by creating an OS user.

So, on that host, create the OS user to be used for the SFTP connection; this time the user is named "nsx-bk-user01":

[root@nsx-work ~]# useradd nsx-bk-user01

[root@nsx-work ~]# passwd nsx-bk-user01

Changing password for user nsx-bk-user01.

New password:   (enter a password)

Retype new password:

passwd: all authentication tokens updated successfully.

 

The plan is to use the created user's home directory, /home/nsx-bk-user01, as the backup destination.

[root@nsx-work ~]# su - nsx-bk-user01

[nsx-bk-user01@nsx-work ~]$ echo $HOME

/home/nsx-bk-user01

[nsx-bk-user01@nsx-work ~]$ pwd

/home/nsx-bk-user01
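Optionally, the backup user can be restricted to SFTP only and confined with a chroot. A sketch for /etc/ssh/sshd_config (the Match block shown is an assumption for this setup, not part of the original walkthrough; note that OpenSSH requires the chroot target to be root-owned, so a writable backup subdirectory would sit below it):

```
# /etc/ssh/sshd_config (excerpt) -- confine nsx-bk-user01 to chrooted SFTP
Match User nsx-bk-user01
    ChrootDirectory /home/nsx-bk-user01
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

After editing, restart sshd (systemctl restart sshd). If you use this, the Backup Directory configured in NSX Manager must then be a path relative to the chroot.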

 

Specifying the backup destination in NSX Manager

Open the NSX Manager management UI and specify the backup destination. The NSX Manager in this example is named "nsxmgr01".

Log in to the NSX Manager management UI as the admin user.

[Screenshot: nsx-mgr-bk-00.png]

On the "Backup & Restore" page, click the "Change" button under "FTP Server Settings".

[Screenshot: nsx-mgr-bk-01.png]

 

Configure the backup destination:

  • IP/Host name → the address of the SFTP server
  • Transfer Protocol → SFTP
  • Port → 22 (SFTP runs over SSH)
  • User name / Password → the nsx-bk-user01 user created above
  • Backup Directory → the backup destination directory
  • Backup Prefix → a prefix for backup file names; here "nsxmgr01-"
  • Pass Phrase → a passphrase for the backup

[Screenshot: nsx-mgr-bk-02.png]

 

Running a backup

A backup schedule (Scheduling) can be configured, but this time we run a backup manually.

Click "Backup".

[Screenshot: nsx-mgr-bk-04.png]

 

Click "Start" to run the backup.

[Screenshot: nsx-mgr-bk-05.png]

 

The backup file is then listed.

[Screenshot: nsx-mgr-bk-06.png]

 

Looking at the directory on the SFTP server, the NSX Manager backup files have been created. Two files are created, but together they form a single backup.

[nsx-bk-user01@nsx-work ~]$ pwd

/home/nsx-bk-user01

[nsx-bk-user01@nsx-work ~]$ ls -lh

total 568K

-rw-rw-r--. 1 nsx-bk-user01 nsx-bk-user01 561K Jun 12 23:06 nsxmgr01-23_06_25_Mon12Jun2017

-rw-rw-r--. 1 nsx-bk-user01 nsx-bk-user01  230 Jun 12 23:06 nsxmgr01-23_06_25_Mon12Jun2017.backupproperties
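The timestamp NSX Manager embeds in the file name (prefix + HH_MM_SS_DayDDMonYYYY) can be parsed back out, which is handy for retention scripts on the SFTP side. A sketch (the format string is inferred from the file name above, so verify it against your own backups):

```python
from datetime import datetime

def parse_backup_name(filename, prefix="nsxmgr01-"):
    """Extract the backup timestamp from an NSX Manager backup file name."""
    stamp = filename[len(prefix):].replace(".backupproperties", "")
    # e.g. "23_06_25_Mon12Jun2017" -> 2017-06-12 23:06:25
    return datetime.strptime(stamp, "%H_%M_%S_%a%d%b%Y")

t = parse_backup_name("nsxmgr01-23_06_25_Mon12Jun2017")
print(t.isoformat())  # 2017-06-12T23:06:25
```

A cleanup job could then delete any pair of files whose parsed timestamp is older than the retention window.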

 

Since NSX Manager itself can break, it is handy to take even a simple backup like this.

That wraps up using a Linux SFTP server as the NSX Manager backup destination.

Running PowerNSX from a Docker container


PowerNSX, a tool for managing NSX for vSphere from PowerCLI, has been released.

In fact, PowerNSX is also included in the PowerCLI Core container image "vmware/powerclicore" published on VMware's Docker Hub. So let's launch PowerNSX from a Docker container.

 

You may also like these earlier posts:

Trying PowerCLI Core in a Docker container.

Creating a VMware NSX logical switch (Logical Switch) with PowerNSX.

 

 

Preparing the Docker host

The Docker host this time is Photon OS. I deployed the OVA file (OVA with virtual hardware v11) downloaded from the page below, changing only the hostname and root password.

Downloading Photon OS · vmware/photon Wiki · GitHub

 

First, update the packages and reboot the OS. (Photon OS uses the tdnf command instead of yum.)

root@photon01 [ ~ ]# tdnf upgrade -y

root@photon01 [ ~ ]# reboot

 

The Photon OS version:

root@photon01 [ ~ ]# cat /etc/photon-release

VMware Photon Linux 1.0

PHOTON_BUILD_NUMBER=62c543d

root@photon01 [ ~ ]# uname -r

4.4.70-3.ph1-esx

 

The Docker version:

root@photon01 [ ~ ]# rpm -q docker

docker-1.13.1-3.ph1.x86_64

 

Then enable and start the Docker service:

root@photon01 [ ~ ]# systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

root@photon01 [ ~ ]# systemctl start docker

root@photon01 [ ~ ]# docker version

Client:

Version:      1.13.1

API version:  1.26

Go version:   go1.8.1

Git commit:   092cba3

Built:        Fri May  5 02:08:33 2017

OS/Arch:      linux/amd64

 

Server:

Version:      1.13.1

API version:  1.26 (minimum version 1.12)

Go version:   go1.8.1

Git commit:   092cba3

Built:        Fri May  5 02:08:33 2017

OS/Arch:      linux/amd64

Experimental: false

 

Starting the PowerCLI Core container

Pull vmware/powerclicore from Docker Hub:

root@photon01 [ ~ ]# docker pull vmware/powerclicore

Using default tag: latest

latest: Pulling from vmware/powerclicore

93b3dcee11d6: Pull complete

64180fb7dedf: Pull complete

46c9ea8ba821: Pull complete

b0ad35240277: Pull complete

f537a588698e: Pull complete

b821ac08cbe0: Pull complete

a76c30f73a8e: Pull complete

e5d8130503e2: Pull complete

a72ad7270123: Pull complete

c6b89e0875bf: Pull complete

d1628dac3e00: Pull complete

57fb698e34cd: Pull complete

9a9d3505a642: Pull complete

bf20548eaf12: Pull complete

a27e923ed27a: Pull complete

f0ecdd77fe48: Pull complete

7b8113d29296: Pull complete

2590b0e2e842: Pull complete

e3b9ecfe2ca0: Pull complete

d4838036c9df: Pull complete

5a536d9f1f30: Pull complete

3f9566a85b2e: Pull complete

bdb2ac6e70be: Pull complete

Digest: sha256:ffe996f7d664b2d8d9cd25501a8cb0a2f7459871b09523c1d3545df780ace211

Status: Downloaded newer image for vmware/powerclicore:latest

 

The image has been downloaded:

root@photon01 [ ~ ]# docker images

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE

vmware/powerclicore   latest              a8e3349371c5        6 weeks ago         610 MB

 

Start the container:

root@photon01 [ ~ ]# docker run -it vmware/powerclicore

 

Once the container starts, it looks like this:

[Screenshot: docker-powernsx-01.png]

 

The PowerCLI Core version:

PS /powershell> Get-PowerCLIVersion

 

PowerCLI Version

----------------

   VMware PowerCLI Core 1.0 build 5327113

---------------

Component Versions

---------------

   VMware vSphere PowerCLI Component 1.22 build 5327113

   VMware VDS PowerCLI Component 5.1 build 5327113

   VMware VDS PowerCLI Component 1.21 build 5327113

 

 

 

PS /powershell>

 

Connecting to vCenter and NSX with PowerNSX

Now, let's connect to vCenter and NSX with PowerNSX.

First, check the PowerNSX version:

PS /powershell> Import-Module -Name PowerNSX

PS /powershell> Get-Module PowerNSX | select Name,Version

 

 

Name     Version

----     -------

PowerNSX 2.1.0

 

 

Note that since PowerNSX is not yet a stable release, a warning is printed in red when you run Import-Module -Name PowerNSX.

[Screenshot: docker-powernsx-02.png]

 

PowerNSX is used after connecting to both NSX Manager and vCenter. Connect-NsxServer has an option to connect to vCenter at the same time, but that does not quite work for me at the moment, so I connect to vCenter and NSX Manager separately.

 

First, connect to vCenter; "vc-sv02" is the vCenter in this example:

PS /powershell> Connect-VIServer vc-sv02.go-lab.jp -Force

 

Specify Credential

Please specify server credential

User: gowatana   (enter the vCenter user)

Password for user gowatana: ********   (enter the password)

 

Connected to vCenter:

PS /powershell> $Global:DefaultVIServer | fl Name,IsConnected

 

 

Name        : vc-sv02.go-lab.jp

IsConnected : True

 

 

Connect to NSX Manager; "nsxmgr01" is the NSX Manager in this example.

The IP address in "Using existing PowerCLI connection to ..." is that of the vCenter we already connected to, and the VIConnection field shows that the existing vCenter connection was recognized.

PS /powershell> Connect-NsxServer -NsxServer nsxmgr01.go-lab.jp -ValidateCertificate:$false

 

 

Windows PowerShell credential request

NSX Manager Local or Enterprise Admin SSO Credentials

User: admin   (enter the NSX Manager user)

Password for user admin: **********   (enter the password)

 

Using existing PowerCLI connection to 192.168.1.96

 

 

Version             : 6.3.1

BuildNumber         : 5124716

Credential          : System.Management.Automation.PSCredential

Server              : nsxmgr01.go-lab.jp

Port                : 443

Protocol            : https

ValidateCertificate : False

VIConnection        : vc-sv02.go-lab.jp

DebugLogging        : False

DebugLogfile        : \PowerNSXLog-admin@nsxmgr01.go-lab.jp-2017_06_13_14_41_28.log

 

 

 

PS /powershell>

 

The cmdlets included with PowerNSX can now retrieve information:

PS /powershell> Get-NsxManagerSystemSummary

 

ipv4Address       : 192.168.1.141

dnsName           : nsxmgr01.go-lab.jp

hostName          : nsxmgr01

domainName        : go-lab.jp

applianceName     : vShield Virtual Appliance Management

versionInfo       : versionInfo

uptime            : 2 days, 3 hours, 14 minutes

cpuInfoDto        : cpuInfoDto

memInfoDto        : memInfoDto

storageInfoDto    : storageInfoDto

currentSystemDate : Tuesday, 13 June 2017 11:41:50 PM JST

 

 

This makes it easy to try out the handy PowerNSX. That said, PowerCLI Core itself is still a Technical Preview, so for now I expect to keep using PowerNSX mainly with Windows PowerCLI.

That's how to run the PowerNSX bundled with the PowerCLI Core container.

vRealize Automation 7.3: Podcast, CloudCred Tasks, & Community Resources



Catch Wednesday's VMware Community Podcast, "What's New with vRealize Automation 7.3", and these recent resources at CloudCredibility.com to bring you up to speed on this latest product release.

 

Task 4294: vRealize Automation Family API Guide

Task 4295: vRA REST API Samples

Task 4296: vRealize Automation API Samples for Postman

Task 4297: vRealize Automation 7.3 Release Notes

Task 4298: vRealize Automation 7.3 REST API Documentation

New Task 4355: VMware Community Podcast: What's new with vRA 7.3

Host Eric Nielsen and guest Jad El-Zein, Principal Architect, CMBU, VMware (@virtualjad), will discuss what's new with vRA 7.3.

 

+++


Plus, don't miss what's Around the Cloud - DevOps, VMTN, Events

All new today & this week! 

 

New Task 4352: VMware {code} features Cody De Arkland

New Task 4353: June 15: Microservices for Enterprises Meetup & FB Livestream

New Task 4354: Congrats Community Warrior: @bayupw

VMware Blog Posts


For anyone who's interested, I've put together a list of blog posts submitted to Altaro's VMware Blog which I grouped, roughly, by category. The links in red will be published in the near future.

 

 

Upgrading

Installing

Monitoring

Management

Storage

Networking

Availability

Security

Features

Informational

PowerCLI

vSphere API

On now! vForum Online 2017 & CloudCred Competition!


Be a part of the Largest Virtual IT Conference

VMware vForum Online 2017 - June 28

Transformation in Action

Plus, be sure to play & win at CloudCredibility.com, as we partner once again for the

vForum Online 2017 & CloudCred Competition!


 

Starting Thursday, June 14, get into the game with pre-event tasks in the vForum Online 2017 CloudCred Badge.


 

Upon the conclusion of the vForum Online, successful badge winners will be entered into drawings for these cool prizes:

A Raspberry Pi 3 Ultimate Starter Kit

Bose Bluetooth headphones

A Samsung Gear 360

 

So don't delay. Begin today!

 

You must complete the badge to be entered in the prize drawings.

 


 

Play, Learn, & WIN.

See you at CloudCredibility.com

June 2017 Microsoft Windows Server, Azure, Nano and life cycle Updates


server storage I/O trends

Microsoft Windows Server, Azure, Nano and life cycle Updates

Spoiler alert, Microsoft is refocusing Nano for containers only, no more Bare Metal...

 

For those of you with an interest in the Microsoft Windows Server life cycle, whether on-premise, on Azure, on Hyper-V, or on Nano, here are some recently announced updates.

Microsoft Windows Server Nano Lifecycle

Microsoft has announced updates to Windows Server Core and Nano along with semi-annual channel updates (read more here). The synopsis of this new update via Microsoft (read more here) is:

In this new model, Windows Server releases are identified by  the year and month of release: for example, in 2017, a release in the 9th month  (September) would be identified as version 1709. Windows Server will release  semi-annually in fall and spring. Another release in March 2018 would be  version 1803. The support lifecycle for each release is 18 months.
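The naming rule above is mechanical enough to compute. A small sketch deriving the semi-annual channel version id and the end of the 18-month support window from a release date (the day clamping to the 1st is a simplification, not Microsoft's exact lifecycle arithmetic):

```python
from datetime import date

def semiannual_version(release: date) -> str:
    """Windows Server semi-annual channel id: YYMM of the release month."""
    return "%02d%02d" % (release.year % 100, release.month)

def end_of_support(release: date, months: int = 18) -> date:
    """Support ends 18 months after release; day clamped to the 1st for simplicity."""
    total = release.year * 12 + (release.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

print(semiannual_version(date(2017, 9, 1)))   # "1709"
print(semiannual_version(date(2018, 3, 1)))   # "1803"
print(end_of_support(date(2017, 9, 1)))       # support window closes in March 2019
```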

Microsoft has announced that its lightweight variant of Windows Server 2016 (if you need a refresher on server requirements visit here), known as Nano, will now be focused on Windows-based containers as opposed to bare metal. As part of this change, Microsoft has reiterated that Server Core, the headless (aka non-desktop user interface) version of Windows Server 2016, will continue as the platform for bare metal along with other deployments where a GUI is not needed. Note that one of the original premises of Nano was that it could be leveraged as a replacement for Server Core.

 

As part of this shift, Microsoft has also stated its intention to further streamline the already slimmed-down version of Windows Server known as Nano by reducing its size another 50%. Keep in mind that Nano is already a fraction of the footprint of regular Windows Server (Core or Desktop UI). Nano's footprint includes its capacity on disk (HDD or SSD), its memory requirements, its boot speed, and its number of components, which in turn cuts the number of updates.

 

By focusing Nano on container use (e.g. Windows containers), Microsoft is providing multiple microservices engines (e.g. Linux and Windows) along with various management options including Docker. Similar to providing multiple container engines, Microsoft is also supporting management from Windows along with Unix.

 

Does This Confirm the Rumor FUD that Nano is Dead?

IMHO, the answer is no: the FUD rumors circulating that Nano is dead are false.

 

Granted, Nano is being refocused by Microsoft on containers and will not be the lightweight headless replacement for Server Core in Windows Server 2016. Instead, Microsoft's focus is two-pronged: continued enhancements to Server Core for headless full Windows Server 2016 deployments, while Nano gets further streamlined for containers. This means Nano is no longer bare-metal or Hyper-V focused; Microsoft indicates that Server Core should be used for those types of deployments.

 

What is clear (besides no bare metal) is that Microsoft is working to slim Nano down even further by removing bare-metal items, PowerShell, .NET and other items, making those optional instead. Microsoft's goal is to make the base Nano image on disk (or via pull) as small as possible, with an initial goal of 50% of its current uncompressed 1 GB disk size. What this means is that if you need PowerShell, you add it as a layer; if you need .NET, you add it as a layer, instead of carrying the overhead of those items when you do not need them. It will be interesting to see how much Microsoft is able to remove from the standard components and turn into options that you can simply add as layers if needed.

 

What About Azure and Bring Your Own License

In case you were not aware or had forgotten: when you use Microsoft Azure and deploy virtual machines (aka cloud instances), you have the option of bringing (e.g. using) your own Windows Server licenses. By using your own Windows Server licenses you can cut the monthly cost of your Azure VMs. Check out the Azure site and explore various configuration options to learn more about pricing and the various virtual machine instances, from Windows to Linux, as well as hybrid deployments.

 

Where To Learn More

What This All Means

Microsoft has refocused Windows Server 2016 Core and Desktop as its primary bare metal including for virtual as well as Azure OS platforms, while Nano is now focused on being optimized for Windows-based containers including Docker among other container orchestration.

 

Ok, nuff said (for now...).

 

Cheers
Gs

AWS S3 Storage Gateway Revisited (StorageIOLab review Part I)



AWS S3 Storage Gateway Revisited (Part I)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus "revisited"). As the first of a two-part series, this post looks at what AWS Storage Gateway is and how it has improved since my last review, along with deployment options. The second post in the series looks at a sample test-drive deployment and use.

 

If you need an AWS primer and overview of various services such as Elastic Cloud Compute (EC2), Elastic Block Storage (EBS), Elastic File Service (EFS), Simple Storage Service (S3), Availability Zones (AZ), Regions and other items check this multi-part series (Cloud conversations: AWS EBS, Glacier and S3 overview (Part I) ).

 


 

As a quick refresher, S3 is the AWS bulk, high-capacity unstructured and object storage service, along with its companion deep-cold (e.g. inactive) Glacier. There are various S3 storage service classes, including standard, reduced redundancy storage (RRS), and infrequent access (IA), which have different availability, durability, performance, service-level and cost attributes.

 

Note that S3 IA is not Glacier: your data always remains online and accessible, while Glacier data can be offline. AWS S3 can be accessed via its API, via HTTP REST calls, and via AWS tools along with those from third parties. Third-party tools include NAS file access such as S3FS for Linux, which I use on my Ubuntu systems to mount S3 buckets and use them like other mount points. Other tools include Cloudberry, S3 Motion, and S3 Browser, as well as plug-ins available in most data protection (backup, snapshot, archive) software tools and storage systems today.

 

AWS S3 Storage Gateway and What's New

The Storage Gateway is the AWS tool you can use to access S3 buckets and objects from your block-volume, NAS file, or tape-based applications. It is intended to give on-premise applications and data infrastructures access to S3 buckets and objects for functions including data protection (backup/restore, business continuance (BC), business resiliency (BR), disaster recovery (DR) and archiving), along with storage tiering to cloud.

 

Some of the things that have evolved with the S3 Storage Gateway include:

  • Easier, streamlined download, installation, deployment
  • Enhanced Virtual Tape Library (VTL) and Virtual Tape support
  • File serving and sharing (not to be confused with Elastic File Services (EFS))
  • Ability to define your own bucket and associated parameters
  • Bucket options including Infrequent Access (IA) or standard
  • Options for AWS EC2 hosted, or on-premise VMware as well as Hyper-V  gateways (file only supports VMware and EC2)

AWS Storage Gateway Three Functions

AWS Storage Gateway can be  deployed for three basic functions:

 

AWS Storage Gateway File Architecture
AWS Storage Gateway File Architecture via AWS.com

    • File Gateway (NFS NAS) - Files, folders, objects and other items are stored in AWS S3, with a local cache for low-latency access to the most recently used data. With this option, you can create folders and subdirectories similar to a regular file system or NAS device, as well as configure various security, permission, and access control policies. Data is stored in S3 buckets for which you specify policies such as standard or Infrequent Access (IA), among other options. AWS offers an EC2-hosted gateway as well as a VMware Virtual Machine (VM) for an on-premise file gateway.


      Also, note that AWS cautions on multiple concurrent writers to S3 buckets with Storage Gateway, so check the AWS FAQs, which may have changed by the time you read this. Current file share limits (subject to change) include 1 file gateway share per S3 bucket (i.e., a one-to-one mapping between a file share and a bucket). There can be 10 file shares per gateway (i.e., multiple shares, each with its own bucket, per gateway) and a maximum file size of 5TB (same as the maximum S3 object size). Note that you might hear about object storage systems supporting unlimited size objects, which some may do; however, generally there are some constraints either on their API front-end, or what is currently tested. View current AWS Storage Gateway resource and specification limits here.
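As a rough sanity check, the quoted limits can be encoded in a few lines. The limit values below are the ones quoted above and are subject to change, so verify against current AWS documentation; the helper function itself is purely illustrative, not an AWS API:

```python
# Sketch: validate a planned file gateway layout against the limits
# quoted above (1 share per bucket, 10 shares per gateway, 5 TB max
# file size). Limits are subject to change -- check the AWS docs.

TB = 1024 ** 4

MAX_SHARES_PER_GATEWAY = 10
MAX_FILE_SIZE = 5 * TB          # also the maximum S3 object size

def validate_layout(shares, largest_file_bytes):
    """shares is a list of bucket names, one per planned file share."""
    problems = []
    if len(shares) > MAX_SHARES_PER_GATEWAY:
        problems.append("too many shares for one gateway")
    if len(set(shares)) != len(shares):
        problems.append("a bucket may back only one file share")
    if largest_file_bytes > MAX_FILE_SIZE:
        problems.append("file exceeds the 5 TB object size limit")
    return problems

print(validate_layout(["awsgwydemo", "backups"], 2 * TB))   # []
print(validate_layout(["b1", "b1"], 6 * TB))
```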

AWS Storage Non-Cached e.g. Stored Volume Gateway Architecture
AWS Storage Gateway Non-Cached Volume Architecture via AWS.com

AWS Storage Gateway cached volume Architecture
  AWS Storage Gateway Cached Volume Architecture via AWS.com

    • Volume Gateway (Block iSCSI) - Leverages S3 with a point-in-time backup as an AWS EBS snapshot. Two options exist: Cached Volumes, with low-latency access to the most recently used data (data is stored in AWS, with a local cache copy on disk or SSD), and Stored Volumes (non-cached), where the primary copy is local and periodic snapshot backups are sent to AWS. AWS provides an EC2-hosted gateway, as well as VMs for VMware and various Hyper-V Windows Server based deployments.


      Current Storage Gateway volume limits (subject to change) include a maximum size of 32TB for a cached volume and 16TB for a stored volume. Note that snapshots of cached volumes larger than 16TB can only be restored to a storage gateway volume; they cannot be restored as an EBS volume (via EC2). There is a maximum of 32 volumes per gateway, with a total size of all volumes for a gateway (cached) of 1,024TB (e.g. 1PB). The total size of all volumes for a gateway (stored volume) is 512TB. View current AWS Storage Gateway resource and specification limits here.
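The restore-target rule above (snapshots of cached volumes larger than 16TB can only go back to a gateway volume, not EBS) can be expressed as a small sketch; limits are as quoted and subject to change, and the function is illustrative only:

```python
# Sketch of the restore-target rule described above: snapshots of
# cached volumes larger than 16 TB cannot be restored as EBS volumes,
# only back to a storage gateway volume. Limits may change.

TB = 1024 ** 4
MAX_CACHED_VOLUME = 32 * TB
MAX_EBS_RESTORE = 16 * TB

def restore_targets(cached_volume_bytes):
    if cached_volume_bytes > MAX_CACHED_VOLUME:
        raise ValueError("exceeds maximum cached volume size")
    targets = ["storage-gateway-volume"]
    if cached_volume_bytes <= MAX_EBS_RESTORE:
        targets.append("ebs-volume")
    return targets

print(restore_targets(8 * TB))    # both targets available
print(restore_targets(20 * TB))   # gateway volume only
```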

AWS Storage Gateway VTL Architecture
  AWS Storage Gateway VTL Architecture via AWS.com

  • Virtual Tape Library Gateway (VTL) - Supports saving your data for backup/BC/DR/archiving into S3 and Glacier storage tiers. Being a virtual tape library, you can specify emulation of tapes for compatibility with your existing backup, archiving and data protection software, management tools and processes.


    Storage Gateway limits for tape include minimum size of a virtual tape 100GB, maximum size of a virtual tape 2.5TB, maximum number of virtual tapes for a VTL is 1,500 and total size  of all tapes in a VTL is 1PB. Note that the maximum number of virtual tapes in an archive is unlimited and total size of all tapes in an archive is also unlimited. View current AWS Storage Gateway resource and specification limits here.
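A back-of-the-envelope planner for those tape limits might look like the following; the figures are the ones quoted above and are subject to change, and the function is an illustration, not an AWS API:

```python
# Rough planner for the VTL limits quoted above: tapes between 100 GB
# and 2.5 TB, at most 1,500 tapes and 1 PB total per VTL (archived
# tapes are unlimited). Figures are subject to change.

import math

GB, TB, PB = 1024 ** 3, 1024 ** 4, 1024 ** 5

def tapes_needed(data_bytes, tape_bytes):
    if not 100 * GB <= tape_bytes <= 2.5 * TB:
        raise ValueError("virtual tape size out of range")
    count = math.ceil(data_bytes / tape_bytes)
    fits = count <= 1500 and count * tape_bytes <= PB
    return count, fits

print(tapes_needed(10 * TB, 2 * TB))   # (5, True)
```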

        AWS

Where To Learn More

What This All Means

As to which gateway function and mode (cached or non-cached for Volumes) depends on what it is that you are trying to do. Likewise choosing between EC2 (cloud hosted) or on-premise Hyper-V and VMware VMs depends on what your data infrastructure support requirements are. Overall I like the progress that AWS has put into evolving the Storage Gateway, granted it might not be applicable for all usage cases. Continue reading more and view images from the AWS Storage Gateway Revisited test drive in part two located here.

 

Ok, nuff said (for now...).

 

Cheers
Gs


Part II Revisiting AWS S3 Storage Gateway (StorageIOlab Test Drive Deployment)


server storage I/O trends

Part II Revisiting AWS S3 Storage Gateway (Test Drive Deployment)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it's called revisited). As part of a two-part series, the first post looks at what AWS Storage Gateway is and how it has improved since my last review, along with deployment options. The second post in the series looks at a sample test drive deployment and use.

 

What About Storage Gateway Costs?

Costs vary by region, type of storage being used (files stored in S3, volume storage, EBS snapshots, virtual tape storage, virtual tape storage archive), as well as type of gateway host, along with how it is accessed and used. Request pricing varies, including data written to AWS storage by the gateway (up to a maximum of $125.00 per month), snapshot/volume delete, virtual tape delete (prorated fee for deletes within 90 days of being archived), virtual tape archival, and virtual tape retrieval. Note that there are also various data transfer fees that also vary by region and gateway host. Learn more about pricing here.
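The fee structure for data written by the gateway, a per-unit charge with a monthly cap, can be sketched as follows. Note the per-GB rate below is a made-up placeholder; only the $125.00 cap comes from the text above, and actual rates vary by region, so check current AWS pricing:

```python
# Illustrative only: the per-GB rate below is hypothetical -- actual
# rates vary by region (see AWS pricing). What this shows is the fee
# structure described above: a per-GB charge on data written by the
# gateway, capped at $125.00 per month.

WRITE_FEE_PER_GB = 0.01      # hypothetical rate, check current pricing
WRITE_FEE_CAP = 125.00       # monthly cap on the data-written fee

def monthly_write_fee(gb_written):
    return min(gb_written * WRITE_FEE_PER_GB, WRITE_FEE_CAP)

print(monthly_write_fee(500))      # 5.0
print(monthly_write_fee(50000))    # capped at 125.0
```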

 

What Are Some Storage Gateway Alternatives

AWS and S3 storage gateway access alternatives include those from various third parties (including some in the AWS Marketplace), as well as via data protection tools (e.g. backup/restore, archive, snapshot, replication) and, more commonly, storage systems. Some tools include Cloudberry, S3FS, S3 Motion, and S3 Browser among many others.

 

A tip: when a vendor says they support S3, ask whether that is for their back-end (i.e., they can access and store data in S3) or their front-end (i.e., they can be accessed by applications that speak the S3 API). Also explore what format the application, tool or storage system uses to store data in AWS storage; for example, are files mapped one to one to S3 objects along with the corresponding directory hierarchy, or are they stored in a save set or other entity?
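To illustrate the one-to-one file-to-object mapping question, here is a minimal sketch of how a file-gateway-style layout maps a path under a share to an S3 object key, preserving the directory hierarchy. The share and bucket names are hypothetical:

```python
# Sketch of the "one file maps to one object" layout discussed above:
# each file under the share becomes an S3 object whose key mirrors
# the relative path, so the directory hierarchy is preserved.
# Share path and bucket name are hypothetical illustration values.

from pathlib import PurePosixPath

def object_key(share_root, file_path):
    """Map a file under the share to its S3 object key."""
    rel = PurePosixPath(file_path).relative_to(share_root)
    return str(rel)

print(object_key("/mnt/awsgwydemo", "/mnt/awsgwydemo/docs/report.txt"))
# -> "docs/report.txt" (an object key in the backing bucket)
```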

 

AWS Storage Gateway Deployment and Management Tips

Once you have created your AWS account (if you did not already have one) and logged into the AWS console (note the link defaults to the US East 1 region), go to the AWS Services Dashboard and select Storage Gateway (or click here, which goes to US East 1). You will be presented with three mode options (File, Volume or VTL).

 

What Does the Storage Gateway Install Look Like

The following is what installing an AWS Storage Gateway for file and then volume looks like. First, access the AWS Storage Gateway main landing page (it might change by the time you read this) to get started. Scroll down and click on the Get Started with AWS Storage Gateway button or click here.

 

AWS Storage Gateway Landing Page

Select type of gateway to create, in the following example File is chosen.

Select type of AWS storage gateway

 

Next select the type of file gateway host (EC2 cloud hosted, or on-premise VMware). If you choose VMware, an OVA will be downloaded (follow the onscreen instructions) that you deploy on your ESXi system or with vCenter. Note that there is a different VMware VM gateway OVA for the File Gateway and another for the Volume Gateway. In the following example the VMware ESXi OVA is selected and downloaded, then accessed via VMware tools such as the vSphere Web Client for deployment.

 

AWS Storage Gateway select download

 

Once your VMware OVA file is downloaded from AWS, install it using your preferred VMware tool; in this case I used the vSphere Web Client.

 

AWS Storage Gateway VM deploy

 

Once you have deployed the VMware VM for the File Storage Gateway, it is time to connect to the gateway using the IP address (static or DHCP) assigned to the VM. Note that you may need to allocate some extra VMware storage to the VM if prompted (this mainly applies to the Volume Gateway). Also follow the directions about setting NTP time, using paravirtual adapters, and thick vs. thin provisioning, along with IP settings. Also double-check to make sure your VM and host are set to the high-performance power setting. Note that the default username is sguser and the password is sgpassword for the gateway.

 

AWS Storage Gateway Connect

 

Once you successfully connect to the gateway, next step will be to configure file share settings.

 

AWS Storage Gateway Configure File Share

 

 

Configure file share by selecting which gateway to use (in case you have more than one), name of an S3 bucket name to create, type of storage (S3 Standard or IA), along with Access Management security controls.

 

AWS Storage Gateway Create Share

 

 

The next step is to complete file share creation; note the commands provided for Linux and Windows for accessing the file share.

 

AWS Storage Gateway Review Share Settings

 

 

Review file share settings

 

AWS Storage Gateway access from Windows

Now let's use the file share by accessing and mounting it on a Windows system, then copying some files to it.

AWS Storage Gateway verify Bucket Items

 

Now let's go to the AWS console (or, in our example, use S3 Browser or your favorite tool) and look at the S3 bucket for the file share to see what is there. Note that each file is an object, and the objects simply appear as files. If there were sub-directories, those would also exist. Note that there are other buckets that I have masked out, as we are only interested in the one named awsgwydemo, which is configured using S3 Standard storage.

 

AWS Storage Gateway Volume

 

Now let's look at using the S3 Storage Gateway for volumes. Similar to deploying the File Gateway, start out at the AWS Storage Gateway page and select Volume Gateway, then select the type of host (EC2 cloud, or VMware or Hyper-V (2008 R2 or 2012) for on-premise deployment). Let's use the VMware gateway; however, as mentioned above, this is a different OVA/OVF than the File Gateway.

 

AWS Storage Gateway Configure Volume

 

Download the VMware OVA/OVF from AWS, and then install it using your preferred VMware tools, making sure to configure the gateway per the instructions. Note that the Volume Gateway needs a couple of storage devices allocated to it. This means you will need to make sure that a SCSI adapter exists (or add one) on the VM, along with the disks (HDD or SSD) for local storage. Refer to AWS documentation about how to size them; for my deployment I added a couple of small 80GB drives (you can choose to put them on HDD or SSD, including NVMe). Note that when connecting to the gateway, if you get an error similar to the one below, make sure that you are in fact using the Volume Gateway and not mistakenly using the File Gateway OVA (VM). Note that the default username is sguser and the password is sgpassword for the gateway.

 

AWS Storage Gateway Connect To Volume

 

Now connect to the local Volume Storage Gateway and notice the two local disks allocated to it.

 

AWS Storage Gateway Cached Volume Deploy

 

Next it's time to create the gateway, in this case deploying a cached volume as shown below.

 

AWS Storage Gateway Volume Create

 

Next up is creating a volume, along with its security and access information.

 

AWS Storage Gateway Volume Settings

 

Volume configuration continued.

 

AWS Storage Gateway Volume CHAP

 

And now some additional configuration of the volume including iSCSI CHAP security.
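For context on what CHAP adds here: rather than sending the shared secret over the wire, the iSCSI initiator proves knowledge of it by returning a hash of an identifier, the secret, and a target-supplied challenge (per RFC 1994). A minimal sketch with made-up secret and challenge values:

```python
# How iSCSI CHAP proves the secret without sending it (per RFC 1994):
# the response is MD5(identifier || secret || challenge). The secret
# and challenge below are made-up illustration values.

import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)                 # sent by the iSCSI target
secret = b"mutual-chap-secret-12chars"     # configured on both sides

# The initiator computes and sends the response; the target recomputes
# it from its own copy of the secret and compares.
resp = chap_response(1, secret, challenge)
assert resp == chap_response(1, secret, challenge)
print(resp.hex())
```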

 

AWS Storage Gateway Windows Access

 

Which leads us up to some Windows related volume access and configuration.

 

AWS Storage Gateway Using iSCSI Volume

 

Now let's use the new iSCSI-based AWS Storage Gateway volume. On the left you can see various Windows command line activity, along with corresponding configuration information on the right.

 

AWS Storage Gateway Being Used by Windows

 

And there you have it, a quick tour of AWS Storage Gateway; granted, there are more options that you can try yourself.

 

AWS

Where To Learn More

What This All Means

Overall I like the improvements that AWS has made to the Storage Gateway, along with the different options it provides. Something to keep in mind is that if you are planning to use the AWS Storage Gateway file serving and sharing mode, there are caveats to multiple concurrent writers to the same bucket. I would not be surprised if some other gateway or software-based tool vendors tried to throw some FUD towards the Storage Gateway; however, then ask them how they coordinate multiple concurrent updates to a bucket while preserving data integrity.

 

Which Storage Gateway variant from AWS to use (e.g. File, Volume, VTL) depends on what your needs are, same with where the gateway is placed (Cloud hosted or on-premise with VMware or Hyper-V). Keep an eye on your costs, and more than just the storage space capacity. This means pay attention to your access and requests fees, as well as different service levels, along with data transfer fees.

 

You might wonder: what about EFS, and why would you want to use AWS Storage Gateway? Good question. At the time of this post EFS has evolved from being internal (e.g. within AWS and across regions) to having an external-facing endpoint; however, there is a catch. That catch (which might have changed by the time you read this) is that the endpoint can only be accessed from AWS Direct Connect locations.

 

This means that if your servers are not in an AWS Direct Connect location, then without some creative configuration, EFS is not an option. Thus Storage Gateway file mode might be an option in place of EFS, as well as using AWS storage access tools from others. For example, I have some of my S3 buckets mounted on Linux systems using S3FS for doing rsync or other operations from local to cloud. In addition to S3FS, I also have various backup tools that place data into S3 buckets for backup, BC and DR, as well as archiving.

 

Check out AWS Storage Gateway yourself and see what it can do or if it is a fit for your environment.

 

Ok, nuff said (for now...).

 

Cheers
Gs

May 2017 Server StorageIO Data Infrastructures Update Newsletter


Server StorageIO Industry Resources and Links

Volume 17, Issue V

Hello and welcome to the May 2017 issue of the Server StorageIO update newsletter.

 

Summer is officially still a few weeks away here in the northern hemisphere; however, for all practical purposes it has arrived. What this means is that in addition to normal workplace activities and projects, there are plenty of outdoor things (as well as distractions) to attend to.

 

Over the past several months I have mentioned a new book that is due out this summer, which means it's getting close to announcement time. The new book title is Software Defined Data Infrastructure Essentials - Cloud, Converged, and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press/Taylor & Francis/Auerbach), which you can learn more about here (with more details being added soon). A common question is whether there will be electronic versions of the book, and the answer is yes (more on this in a future newsletter).

Data Infrastructures

 

Another common question is what the book is about: what is a data infrastructure (see this post) and what is tradecraft (see this post)?

 

Software-Defined  Data Infrastructures Essentials provides fundamental  coverage of physical, cloud, converged, and virtual server storage I/O  networking technologies, trends, tools, techniques, and tradecraft skills. From  webscale, software-defined, containers, database, key-value store, cloud, and  enterprise to small or medium-size business, the  book is filled with techniques,  and tips to help develop or refine your server storage I/O hardware, software,  and services skills. Whether you are new to data infrastructures or a seasoned  pro, you will find this comprehensive reference indispensable for gaining as  well as expanding experience with technologies, tools, techniques, and trends.

 

Software-Defined Data Infrastructure Essentials SDDI SDDC
ISBN-13: 978-1498738156
  ISBN-10: 149873815X
  Hardcover: 672 pages
  Publisher: Auerbach Publications; 1 edition (June 2017)
  Language: English

 

Watch for more news and insight about my new book Software-Defined Data Infrastructure Essentials soon. In the meantime,  check out the various items below in this edition of the Server StorageIO Update.

In This Issue

Enjoy this edition of the Server StorageIO update newsletter.

Cheers GS

Data Infrastructure and IT Industry Activity Trends

Some recent Industry Activities, Trends, News and Announcements include:

 

Flackbox.com has some new independent (non NetApp produced) learning resources including NetApp simulator eBook and MetroCluster tutorial. Over in the Microsoft world, Thomas Maurer has a good piece about Windows Server build 2017 and all about containers. Microsoft also announced SQL Server 2017 CTP 2.1 is now available. Meanwhile here are some my experiences and thoughts from test driving Microsoft Azure Stack.
   
Speaking of NetApp, among other announcements they released a new version of their StorageGRID object storage software. NVMe activity in the industry (and at customer sites) continues to increase, with Cavium QLogic NVMe over Fabrics news, along with recent Broadcom NVMe RAID announcements. Keep in mind that if the answer is NVMe, then what are the questions.

 

Here is a good summary of the recent OpenStack Boston Summit. StorPool did a momentum announcement; for those of you into software-defined storage, add StorPool to your watch list. On the VMware front, check out this vSAN 6.6 stretched-cluster demo (video) via Yellow Bricks.

 

Check out other industry news, comments, trends perspectives here.

 

Server StorageIOblog Posts

Recent and popular Server StorageIOblog posts include:

View other recent as well as past StorageIOblog posts here

Server StorageIO Commentary in the news

Recent Server StorageIO industry trends perspectives commentary in the news.

Via EnterpriseStorageForum: What to Do with Legacy Assets in a Flash Storage World
  There is still a place for hybrid arrays. A hybrid array is the home run when it comes to leveraging your existing non-flash, non-SSD based assets today.
 
  Via EnterpriseStorageForum: Where All-Flash Storage Makes No Sense
    A bit of flash in the right place can go a long way, and everybody can benefit from at least some flash somewhere. Some might say the more, the better. But where you have budget constraints that simply prevent you from having more flash for things such as cold, inactive, or seldom-accessed data, you should explore other options.
   
    Via Bitpipe: Changing With the Times - Protecting VMs(PDF)

 

    Via FedTech: Storage Strategies: Agencies Optimize Data Centers by Focusing on Storage

 

    Via SearchCloudStorage: Dell EMC cloud storage strategy needs to cut through fog
   
    Via SearchStorage: Microsemi upgrades controllers based on HPE technology

 

    Via EnterpriseStorageForum: 8 Data Machine Learning and AI Storage Tips

 

    Via SiliconAngle: Dell EMC announces hybrid cloud platform for Azure Stack

View more Server, Storage and I/O trends and perspectives comments here

Events and Activities

Recent and upcoming event activities.

Sep. 13-15, 2017 - Fujifilm IT Executive Summit - Seattle WA

August 28-30, 2017 - VMworld - Las Vegas

July 22, 2017 - TBA

June 22, 2017 - Webinar - GDPR and Microsoft Environments

May 11, 2017 - Webinar - Email Archiving, Compliance and  Ransomware

See more webinars and activities on the Server StorageIO Events page here.

Useful links and pages:
Microsoft TechNet - Various Microsoft related from Azure to Docker to Windows
storageio.com/links - Various industry links (over 1,000 with more to be added soon)
objectstoragecenter.com - Cloud and object storage topics, tips and news items
OpenStack.org - Various OpenStack related items
storageio.com/protect - Various data protection items and topics
thenvmeplace.com - Focus on NVMe trends and technologies
thessdplace.com - NVM and Solid State Disk topics, tips and techniques
storageio.com/converge - Various CI, HCI and related SDS topics
storageio.com/performance - Various server, storage and I/O  benchmark and tools
VMware Technical Network - Various VMware related items

Ok, nuff said, for now.

Cheers
Gs

GDPR goes into effect May 25 2018 Are You Ready?


server storage I/O trends

GDPR  goes into effect May 25 2018 Are You Ready?

The new European General Data Protection Regulation (GDPR) goes into effect in a year, on May 25, 2018. Are you ready?

 

Why  Become GDPR Aware

If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR in Europe directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned elsewhere in the world. Likewise, unlike earlier regulatory compliance that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (SARBOX, Dodd-Frank among others), these new regulations can be more far-reaching.

 

GDPR  Looking Beyond Compliance

Taking a step back, GDPR, as its name implies, is about general data protection, including how information is protected, preserved, secured and served. This also includes taking safeguards to logically protect data with passwords and encryption among other techniques. Another dimension of GDPR is reporting and the ability to track who has accessed what information (including when), as well as simply knowing what data you have.
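The "who accessed what, and when" requirement can be pictured as an append-only audit trail alongside data access. A toy sketch follows; the field names and store shape are illustrative assumptions, not anything prescribed by GDPR:

```python
# Minimal sketch of the access-tracking idea above: record who read
# which record and when, in an append-only trail that can answer
# "who accessed what". Field names are illustrative, not from GDPR.

import datetime

audit_trail = []

def audited_read(user, record_id, store):
    """Read a record while logging the access event."""
    audit_trail.append({
        "user": user,
        "record": record_id,
        "when": datetime.datetime.utcnow().isoformat() + "Z",
    })
    return store.get(record_id)

store = {"cust-42": {"name": "Example Person"}}
audited_read("analyst1", "cust-42", store)
print(audit_trail[0]["user"], audit_trail[0]["record"])
```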

 

What this means is that GDPR impacts users ranging from consumers of social media such as Facebook, Instagram, Twitter, and LinkedIn, to cloud storage and related services, as well as traditional applications. In other words, GDPR is not just for finance and healthcare; it is more far-reaching, making sure you know what data exists and taking adequate steps to protect it.

There is a lot more to discuss about GDPR in Europe, as well as what else is being done in other parts of the world. For now, being aware of initiatives such as GDPR and their broader impact beyond traditional compliance is important. With these new initiatives, the focus expands from the compliance office or officers to the data protection office and data protection officer, whose scope is to protect, preserve, secure and serve data along with associated information.

 

GDPR  and Microsoft Environments

As part of generating awareness and helping with planning, I'm going to be presenting a free webinar produced by Redmond Magazine and sponsored by Quest (who will also be a co-presenter) on June 22, 2017 (7AM PT). The title of the webinar is GDPR Compliance Planning for Microsoft Environments.

 

This webinar looks at the General  Data Protection Regulation (GDPR) and its impact on Microsoft environments.  Specifically, we look at how GDPR along with other future compliance directives impact Microsoft cloud, on-premise, and hybrid environments, as well as what you can do to be ready before the May 25, 2018 deadline. Join us for this  discussion of what you need to know to plan and carry out a strategy to help  address GDPR compliance regulations for Microsoft environments.
   
  What you will learn during this discussion:

  • Why GDPR and other regulations  impact your environment
  • How to assess and find  compliance risks
  • How to discover who has access to  sensitive resources
  • Importance of real-time auditing to  monitor and alert on user access activity

 

This webinar applies to  business professionals responsible for strategy, planning and policy decision-making for Microsoft environments along with associated applications. This  includes security, compliance, data protection, system admins, architects and  other IT professionals.

 

What  This All Means

Now is the time to start planning and preparing for GDPR if you have not done so and need to, as well as becoming more generally aware of it and other initiatives. One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just compliance, as we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, including privacy, protect, preserve, secure and serve, as well as manage, insight and awareness, along with associated reporting. Join me and Quest on June 22, 2017 at 7AM PT for the webinar GDPR Compliance Planning for Microsoft Environments to learn more.

 

Ok, nuff said, for now.

Cheers
Gs

Dell EMC World 2017 Day One news announcement summary


server storage I/O trends

Dell EMC World 2017 Day One news announcement summary

Recap of the first day of the first combined Dell EMC World 2017, being held in Las Vegas, Nevada. Last year's event in Las Vegas was the last EMC World, while this is the first of the combined Dell EMC World events that succeeded its predecessors.

 

What this means is an expanded focus, because the new Dell EMC has added servers among other items to the event focus. Granted, EMC had been doing servers via its VCE and converged divisions; however, with the Dell EMC integration completed as of last fall, the Dell server group is now part of the Dell EMC organization.

 

The central theme of  this Dell EMC world is REALIZE with a focus on four pillars:

  • Digital Transformation  (Pivotal focus) of applications
  • IT Transformation (Dell  EMC, Virtustream, VMware) data center modernization
  • Workforce transformation  (Dell Client Solutions) devices from mobile to IoT
  • Information Security (RSA  and Secureworks)

software defined data infrastructures SDDI and SDDC

What Did Dell EMC Announce

Note that while there are focus areas of the different Dell Technologies  business units aligned to the pillars, there is also leveraging across those  areas and groups. For example, VMware NSX spans into security, and  PowerEdge  servers span into other pillars as a core data infrastructure building block.

 

What Dell EMC and Dell Technologies announced today.

New 14th generation  PowerEdge Servers that are core building  blocks for data infrastructures

Dell EMC has announced the 14th generation of the Intel-powered Dell EMC PowerEdge server portfolio. These include servers that get defined with software for software-defined data centers (SDDC) and software-defined data infrastructures (SDDI) for cloud, virtual, and container as well as storage among other applications. Target application workloads and environments range from high-performance compute (HPC) and high-productivity (or profitability) compute (the other HPC), super compute (SC), little data and big data analytics, to legacy and emerging business applications, as well as cloud and beyond. Enhancements besides new Intel processor technology include enhanced iDRAC, OpenManage, REST interface, QuickSync, and Secure Boot, among other management, automation, security, performance, and capacity updates.

 

Other Dell EMC enhancements with Gen14 include support for various NVDIMMs to enable persistent memory, also known as storage class memories, such as 3D XPoint among others. Note that at this time Dell EMC is not saying much about speeds, feeds and other details; stay tuned for more information on these in the weeks and months to come.

 

Dell EMC has also been a leader in deploying NVMe, from PCIe flash cards to 8639 (U.2) devices such as 2.5" drives. Thus it makes sense to see continued adoption and deployment of those devices along with SAS and SATA support. Note that Broadcom (formerly known as Avago) recently announced the release of their PCIe SAS, SATA and NVMe based adapters.

 

The reason this is worth mentioning is that in the past Dell has OEM-sourced Avago (formerly known as LSI) based adapters. Given Dell EMC's use of NVMe drives, it only makes sense to put two and two together.

 

Let's wait a few months to see what the speeds, feeds, and specifications are to put the rest of the puzzle together. Speaking of NVMe, also look for Dell EMC to support PCIe AIC and U.2 (8639) NVMe devices, and to leverage M.2 Next Generation Form Factor (NGFF), aka gum sticks, as boot devices.

 

While these are all Intel-focused, I would expect Dell EMC not to sit back; instead, watch for what they do with other processors and servers, including ARM among others.

 

Increased support for more GPUs to support VDI and other graphics-intensive workloads such as video rendering and imaging among others. Part of the enhanced GPU support is improvements (multi-vector cooling) to power and cooling, including sensing the type of PCIe card and then adjusting cooling fans and subsequent power draw accordingly. The benefit should be more precise cooling, reducing power while supporting more work and productivity.

 

Flexible consumption models (financing and more) from desktop to data center

Dell Technologies has announced several financing, procurement, and consumption models with cloud-like flexible options for different IT and data center, along with mobile device, technologies.

These range from licensing to deployment as a  service, consumption and other options  via Dell Financial Services (DFS).

 

Highlights include:

  • DFS Flex on Demand is available now in select countries globally.
  • DFS Cloud Flex for HCI is available now for Dell EMC VxRail and Dell EMC XC Series, with planned availability for Dell EMC VxRack Systems in Q3 2017.
  • PC as a Service is available now in select countries globally.
  • Dell EMC VDI Complete Solutions are available now in select countries globally.
  • The Dell Technologies transformation license agreement (TLA) is available now in select countries

Hyper-Converged Infrastructure (HCI), Converged (CI) and Cloud like systems

Enhancements to VxRail Systems, VxRack Systems, and XC Series leveraging Dell EMC Gen14 PowerEdge servers, along with other improvements. Note that this also includes continued support for VMware, Microsoft as well as Nutanix software-defined solutions.

 

New All-Flash Array (AFA) SSD Storage Systems (VMAX, XtremIO X2, Unity, SC, Isilon)

Storage system enhancements span from high-end (VMAX and XtremIO) to mid-range (Unity and SC), along with scale-out NAS (Isilon).

Highlights of the announcements include:

  

  • New VMAX 950F all flash  array (AFA)
  • New XtremIO X2 with  enhanced software, more powerful hardware
  • New Unity AFA systems
  • New SC5020 midrange hybrid  storage
  • New generation of Isilon  storage with improved performance, capacity, density

  

Integrated Data Protection Appliance  (IDPA) and Cloud Protection  solutions

Data protection enhancement highlights include:   

  • New turnkey Integrated Data Protection Appliance (IDPA) with four models (DP5300, DP5800, DP8300, and DP8800), starting at 34 TB usable and scaling up to 1PB usable. Data services include encryption, data footprint reduction such as dedupe, remote monitoring, and maintenance service dispatch, along with application integration. Application integration includes MongoDB, Hadoop, and MySQL.
        
  • Enhanced cloud capabilities powered by Data Domain Virtual Edition (DD VE 3.1), along with the data protection suite, enable data to be protected to, and restored from, Amazon Web Services (AWS) Simple Storage Service (S3) as well as Microsoft Azure.
        
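As an aside on the data footprint reduction (dedupe) feature mentioned above, the core idea can be sketched in a few lines: store each unique chunk once, keyed by its content hash. This is a toy illustration of the general technique, not how IDPA or Data Domain are implemented:

```python
# Toy illustration of deduplication ("data footprint reduction"):
# store each unique chunk once, keyed by its content hash, plus an
# ordered recipe of hashes to rebuild the original stream. Real
# products use far more sophisticated chunking and indexing.

import hashlib

def dedupe(chunks):
    store = {}                      # hash -> chunk, each stored once
    recipe = []                     # ordered hashes to rebuild data
    for chunk in chunks:
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)
        recipe.append(h)
    return store, recipe

data = [b"block-A", b"block-B", b"block-A", b"block-A"]
store, recipe = dedupe(data)
print(len(data), "chunks written,", len(store), "stored")  # 4 chunks written, 2 stored
assert [store[h] for h in recipe] == data                  # lossless rebuild
```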

Open Networking and software-defined  networks (SDN) with 25G

  Dell EMC Open Networking highlights include:

  • Dell EMC's first 25GbE open networking top of rack (TOR) switch, the S5100-ON series (with OS10 Enterprise Edition software), complementing new PowerEdge Gen14 servers with native 25GbE support. Switches support 100GbE uplink fabric connectivity for east-west (management) network traffic. Also announced are the S4100-ON series and N1100-ON series, in addition to the recently announced N3100-ON and N2100-ON switches.
  • Dell EMC's first optimized Open Networking platform for unified storage network switching, including support for 16Gb/32Gb Fibre Channel
  • New Network Function Virtualization  (NFV) and IoT advisory consulting services

Note that Dell EMC is announcing the availability of these networking solutions in the Dell Technologies 2018 fiscal year, which begins before the corresponding calendar year.

Several Software-Defined Storage (SDS) enhancements using Gen14 servers

 

Dell EMC announced enhancements to their Software Defined Storage (SDS) portfolio that leverage the PowerEdge 14th generation server portfolio. These improvements include ScaleIO, Elastic Cloud Storage (ECS), IsilonSD Edge, and a preview of Project Nautilus.

 

Where to learn more

What this  all means

This is a summary of what has been announced so far on the first morning of the first day of the first new Dell EMC World. Needless to say, there is more detail to look at in the above announcements, from speeds, feeds, and functionality to related topics, which will be addressed in subsequent posts. Overall this is a good set of announcements, expanding the capabilities of the combined Dell EMC while enhancing existing systems as well as solutions.

 

Ok, nuff said (for now...).

 

Cheers
Gs

Azure Stack Technical Preview 3 (TP3) Overview Preview Review (Server StorageIOlab Review)


server storage I/O trends

Azure Stack Technical Preview 3 (TP3) Overview Preview Review

Perhaps you are aware of or use Microsoft Azure, but how about Azure Stack?

 

This is part one of a two-part series looking at Microsoft Azure Stack providing an overview, preview and review. Read part two here that looks at my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3).

 

For those who are not aware, Azure Stack is a private, on-premises extension of the Azure public cloud environment. Azure Stack is now in Technical Preview 3 (TP3), or what you might also refer to as a beta (get the bits here).

 

In addition to being available via download as a preview, Microsoft is also working with vendors such as Cisco, Dell EMC, HPE, Lenovo and others who have announced Azure Stack support. Vendors such as Dell EMC have also made proof of concept kits available that you can buy, including server, storage, and software. Microsoft has also indicated that once production versions launch, scaling from a few to many nodes, a single-node proof of concept or development system will also remain available.

 

Software-Defined Data Infrastructures (SDDI) aka Software-defined Data Centers, Cloud, Virtual and Legacy

 

Besides being an on-premises, private cloud variant, Azure Stack is also hybrid capable, able to work with the public Azure cloud. In addition, Azure Stack services, and in particular workloads, can also work with traditional Microsoft, Linux and other environments. You can use pre-built solutions from the Azure Marketplace, in addition to developing your applications using Azure services and DevOps tools. Azure Stack enables hybrid deployment into public or private cloud to balance flexibility, control and your needs.

 

Azure Stack Overview

Microsoft Azure Stack is an on-premises (e.g. in your own data center) private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (TP3) and is available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating, and trying features among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

 

In summary, Microsoft Azure Stack is:

  • An onsite, on-premises, in-your-data-center extension of the Microsoft Azure public cloud
  • Enabling private and hybrid cloud with strong integration and common experiences with Azure
  • Adopt, deploy, and leverage cloud on your terms and timeline, choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverage speed of deployment and configuration with a purpose-built, integrated solution
  • Support for existing and cloud-native Windows, Linux, Container and other services
  • Available as a public preview via software download, as well as via vendors offering solutions

What is Azure Stack Technical Preview 3 (TP3)

This version of Azure Stack is a single node running on a lone physical machine (PM), aka bare metal (BM). However, it can also be installed into a virtual machine (VM) using nesting. For example, I have Azure Stack TP3 running nested on a VMware vSphere ESXi 6.5 system with a Windows Server 2016 VM as its base operating system.
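As a hedged illustration (a sketch, not taken from the original post): nesting like this requires the ESXi VM to expose hardware-assisted virtualization to its Windows Server 2016 guest so the guest can run its own hypervisor. In the vSphere Web Client this is the "Expose hardware assisted virtualization to the guest OS" CPU option; the rough equivalent in the VM's .vmx file looks like the following. Exact settings can vary by ESXi version, so verify against your environment.

```
# Expose Intel VT-x/EPT (or AMD-V/RVI) to the guest OS so it can run a
# nested hypervisor such as Hyper-V inside the Windows Server 2016 VM
vhv.enable = "TRUE"
```

Nested lab setups also commonly need promiscuous mode and forged transmits allowed on the vSwitch or port group, so that traffic from the nested VMs can reach the outside network.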

 

Microsoft Azure Stack architecture
Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

 

The TP3 POC Azure Stack is not intended for production environments, only for testing, evaluation, learning and demonstrations as part of its terms of use. This version of Azure Stack uses a single-node identity model, either Azure Active Directory (AAD) integrated with Azure, or Active Directory Federation Services (ADFS) for standalone mode. Note that since this is a single-server deployment, it is not intended for performance testing; rather, it is for evaluating functionality, features, APIs and other activities. Learn more about Azure Stack TP3 details here (or click on the image), including the names of the various virtual machines (VMs) as well as their roles.

 

Where to learn more

 

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows, along with related topics.

  

What this  all means

A common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid cloud is dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is too early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

 

Given the large number of Microsoft Windows-based servers on VMware, OpenStack, and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also prime candidates, as are containers and other services.

 

Continue reading more in part two of this two-part series here including installing Microsoft Azure Stack TP3.

 

Ok, nuff said (for now...).

 

Cheers
Gs
