Discovering VMware vCenter Server Appliance.
It runs on SUSE Linux Enterprise 11.
Interesting product.
Starting with vSphere 5.1, you can now do a vMotion without shared storage.
※This is commonly known as Cross-Host Storage vMotion, or x-vMotion.
In this post, I try a shared-storage-free vMotion from PowerCLI.
Shared-Nothing vMotion
Previously, a VM you wanted to vMotion had to have
its VMDK files on a shared datastore.
Starting with vSphere 5.1, vMotion works
even when the VMDK files are not on shared storage
(they can sit on an ESXi host's local disk).
However, because it is a 5.1 feature, it basically has to be run from the Web Client.
The traditional vSphere Client therefore cannot perform a shared-storage-free vMotion
(migrating the host and the datastore at the same time) on a running VM.
※It is still possible to migrate the host and the datastore separately, in two passes.
In fact, some of the new 5.1 features can be run not only from the Web Client but also from PowerCLI 5.1.
In that case, PowerCLI connects to vCenter and runs the commands just as before (not to the Web Client server).
Shared-storage-free vMotion (hereafter x-vMotion) is one of the things PowerCLI 5.1 can do.
x-vMotion procedure with PowerCLI
1. Connect to vCenter 5.1 with PowerCLI.
I have PowerCLI 5.1 installed,
matching the vSphere version.
PowerCLI C:\> Get-PowerCLIVersion

PowerCLI Version
----------------
VMware vSphere PowerCLI 5.1 Release 1 build 793510
---------------
Snapin Versions
---------------
VMWare AutoDeploy PowerCLI Component 5.1 build 768137
VMWare ImageBuilder PowerCLI Component 5.1 build 768137
VMware License PowerCLI Component 5.1 build 669840
VMware vSphere PowerCLI Component 5.1 build 793489
This time I am connecting to the vCenter at IP address 192.168.5.52.
PowerCLI C:\> Connect-VIServer -Server 192.168.5.52
★Enter a user name and password that can log in to vCenter.
2. Check the environment.
The ESXi hosts are the two shown below.
Both ESXi 5.1 hosts are at version 5.1.0.
Each has only its local datastore (ds_esxi01 and ds_esxi02).
PowerCLI C:\> Get-VMHost | select Name,Version,{$_ | Get-Datastore} | ft -AutoSize

Name         Version $_ | Get-Datastore
----         ------- ------------------
192.168.5.61 5.1.0 ds_esxi01
192.168.5.62 5.1.0 ds_esxi02
The VM named "vm01" is powered on (PoweredOn) on ESXi host #1.
PowerCLI C:\> Get-VM | select Name,PowerState,VMHost | ft -AutoSize
Name PowerState VMHost
---- ---------- ------
vm01 PoweredOn 192.168.5.61
3. Run the x-vMotion.
Let's x-vMotion vm01.
With PowerCLI's Move-VM, specifying another ESXi host as the destination (-Destination) performs a vMotion.
PowerCLI C:\> Move-VM -VM vm01 -Destination 192.168.5.62 -Datastore ds_esxi02
The migration task runs for a while, and when the x-vMotion completes, vm01 has moved to ESXi host #2.
PowerCLI C:\> Get-VM | select Name,PowerState,VMHost | ft -AutoSize
Name PowerState VMHost
---- ---------- ------
vm01 PoweredOn 192.168.5.62
There is no dedicated PowerCLI cmdlet for x-vMotion;
just like a traditional vMotion, Move-VM is all it takes.
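For reference, the two-step migration mentioned earlier (moving the host and the datastore separately) also uses the same cmdlet. A minimal sketch with placeholder targets; note that moving them separately assumes the datastore involved is visible to the source host:
PowerCLI C:\> Move-VM -VM vm01 -Destination <target ESXi host>   ★Host only (vMotion).
PowerCLI C:\> Move-VM -VM vm01 -Datastore <target datastore>     ★Datastore only (Storage vMotion).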
It made me appreciate a comment I once saw on a VMware blog, to the effect that
"shared-nothing vMotion is not a new feature, just an enhancement of an existing one."
That's it for doing x-vMotion with PowerCLI.
I am using vSphere 5. One of my colleagues created a few VMs with thick provisioned lazy zeroed disks.
Since the disks are thick provisioned, is there a way to know the actual usage of the virtual machine disks, I mean used space and free space, the way we get that information for a VM with thin provisioned disks?
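A hedged sketch of one way to check this (assuming a VM named vm01 and VMware Tools running in the guest) is to compare what vCenter reports for the VM with what the guest reports for its volumes:
PowerCLI C:\> Get-VM vm01 | Select-Object Name,ProvisionedSpaceGB,UsedSpaceGB
PowerCLI C:\> (Get-VM vm01).Guest.Disks | Format-List *   # per-volume capacity and free space as reported by VMware Tools
For a thick lazy-zeroed disk the datastore-level used space stays close to the provisioned size, so the guest-reported numbers are usually the ones that answer the used/free question.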
Last week we released VMware vFabric Postgres 9.2. If you haven't tried it out yet, do give it a try; it only takes 5 minutes if you have any VMware virtualization platform (vSphere ESXi, vCenter Server, VMware Workstation, VMware Fusion or VMware Player (free)).
This week as I prepare my slides and resources for the VMware Partner Exchange boot camp on Feb 28 (Thursday) at 8:30AM titled:
CAS1503 - High Availability, Replication and Read Scaling with Virtualized Postgres
I realized that I first need to introduce the typical use cases in a datacenter, which serve as the backdrop for the presentation.
Databases are currently considered to be among the most battle-tested platforms in any data center. As enterprises move towards virtualization, production databases are generally the last applications to be virtualized, and PostgreSQL is no exception. Many enterprises do a "fork-lift" upgrade whenever they change a platform. Today we introduce ways, specifically with VMware vFabric Postgres (vPostgres) 9.2, which is based on PostgreSQL 9.2, as a concrete example of how fork-lift upgrades are a thing of the past and a "hybrid" existence is now the preferred way.
So let's say that enterprise A has some production PostgreSQL databases running on physical machines using 64-bit Linux. What would be the right way to even think about virtualizing that workload?
STEP 1: Begin with Physical Database to Virtual Database Replication
vPostgres 9.2 uses the PostgreSQL 9.2 core with the same database file format. It also uses the same streaming replication as PostgreSQL 9.2, which allows streaming replication to vPostgres 9.2 to work (as long as the minor version, the third digit, also matches on both sides):
PostgreSQL 9.2 (physical) -> vPostgres 9.2 (virtual)
vPostgres 9.2 also ships Linux RPM packages for RHEL 6.1 or greater and SLES 11 SP2, so if you have existing PostgreSQL 9.2 database files, vPostgres can even be installed on RHEL on a physical instance and replicate to vPostgres on a virtual instance:
vPostgres 9.2 (physical) -> vPostgres 9.2 (virtual)
The obvious questions are (a) Why should I use Replication? and (b) Why should I use Virtualization?
Let's answer them separately first.
(a) Why should I use Replication?
Postgres replication gives two main benefits: HA/DR and read scaling.
HA/DR - High Availability and Disaster Recovery is achieved by having a hot-standby copy of the production database always available to take over when "stuff" happens. Postgres gives various options for achieving SLAs, depending on whether the priority is response time or strict data guarantees, using synchronous or asynchronous streaming replication.
Also, when organizations use a hot standby, they typically do not want the standby server to be idle. They also want to offload read-only queries to some extent to the standby server, freeing up write contention on the master (especially in asynchronous mode) and thus allowing the master database to handle more write requests. In fact, many online retailers end up using more than one read server, since most of their users browse far more than they actually use their credit cards, and those read servers are often located closer to the application servers that consume the data.
(b) Why should I use Virtualization for Replication?
Ideally, a database replica (slave) server's resources should match the master database server's resource specification. This is needed to make sure that the replica can always keep up with the transactions happening on the master database. However, always doing that is not economical. One wants the flexibility to allocate the same resources by default, but to temporarily increase them for, say, an end-of-the-month job and then return to the regular allocation. Virtualization also provides a better platform for testing HA/DR without completely wasting the replica.
(Well, the real question is why VMware virtualization. :-)
VMware vSphere 5.1 allows a virtual machine to be as big as 64 vCPUs with 255GB-1TB of RAM. This allows your database replica to always keep up with your master. Also, using linked clones makes it easier to do DR testing by promoting the slave to a new master.
Further, with the VMware vFabric Postgres 9.2 virtual appliance it is even easier: just tweak the resource settings and the appliance automatically recalculates its internal settings to make sure the vPostgres database always runs at optimum values for those resources.
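To make the resource-flexibility point concrete, temporarily resizing a replica VM for an end-of-month job and shrinking it back afterwards is only a couple of PowerCLI calls. A minimal sketch, assuming a hypothetical replica VM named pg-replica01 and that CPU/memory hot-add is enabled (otherwise power the VM off briefly for the change):
PowerCLI C:\> Get-VM pg-replica01 | Set-VM -NumCpu 8 -MemoryGB 64 -Confirm:$false   # grow for the batch window
PowerCLI C:\> Get-VM pg-replica01 | Set-VM -NumCpu 4 -MemoryGB 32 -Confirm:$false   # back to the regular allocation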
Also, PostgreSQL 9.2 now allows cascading replication, which means you could have:
PostgreSQL 9.2 or vPostgres 9.2 (physical) -> vPostgres 9.2 (sync or async replication) -> vPostgres 9.2 read slaves
STEP 2: Migrating to Virtual-Database-to-Virtual-Database Replication
Once STEP 1 has shown enough operational evidence of running a Postgres database in a virtual environment, or if a hardware failure hits the physical host, vPostgres 9.2 can be promoted to master instantly with a command-line operation, and the setup becomes completely virtual-database-to-virtual-database replication.
Let's change some of a VM's finer-grained settings with PowerCLI.
I am running this on vSphere 5.1 and PowerCLI 5.1,
but the same settings are possible on earlier releases.
For example, the VM resource settings include an option called
"Reserve all guest memory (All locked)".
(Below, I'll call it "full memory reservation".)
When this setting is enabled, the VM is guaranteed
to be allocated memory up to its configured size whenever it needs it.
However, detailed settings like this cannot be changed with PowerCLI's Set-VM.
Settings that Set-VM cannot handle can often be changed as shown below.
PowerCLI C:\> $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
PowerCLI C:\> $spec.MemoryReservationLockedToMax = $true
PowerCLI C:\> $vm = Get-VM vm01 | Get-View
PowerCLI C:\> $vm.ReconfigVM($spec)
★vm01 is the virtual machine name.
Explanation of the commands
1. Check the state before the change.
To start with, vm01 (the VM being configured)
has full memory reservation disabled.
PowerCLI C:\> (Get-VM vm01 | Get-View).Config.MemoryReservationLockedToMax
False ★The default is disabled (False).
2. Create the "configuration spec object" that is passed when changing the VM's settings.
PowerCLI C:\> $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
3. Include the setting that enables full memory reservation.
Add the setting that enables full memory reservation to the configuration spec object.
At this point nothing has been applied to vm01 yet.
PowerCLI C:\> $spec.MemoryReservationLockedToMax = $true
PowerCLI C:\> $spec.MemoryReservationLockedToMax
True ★Enabled in the spec object only; not yet applied to vm01.
4. Reconfigure the VM with the configuration spec object.
Here we pass the spec object ($spec) created in step 3
and reconfigure (ReconfigVM) the VM.
PowerCLI C:\> (Get-VM vm01 | Get-View).ReconfigVM($spec)
5. Check the VM's settings.
The VM's full memory reservation setting is now enabled (True).
PowerCLI C:\> (Get-VM vm01 | Get-View).Config.MemoryReservationLockedToMax
True ★The full memory reservation setting has been applied to vm01.
You can also confirm in the vSphere Client that the setting is now enabled.
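As an aside, the same spec object can be applied to several VMs at once. A minimal sketch, assuming the target VMs are the ones whose names start with "vm":
PowerCLI C:\> $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
PowerCLI C:\> $spec.MemoryReservationLockedToMax = $true
PowerCLI C:\> Get-VM vm* | Get-View | ForEach-Object { $_.ReconfigVM($spec) }   ★Reconfigure each matching VM with the same spec.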
[Reference]
The items that can be set through VirtualMachineConfigSpec are listed below.
PowerCLI C:\> $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
PowerCLI C:\> $spec ★Display the contents of the object.
ChangeVersion :
Name :
Version :
Uuid :
InstanceUuid :
NpivNodeWorldWideName :
NpivPortWorldWideName :
NpivWorldWideNameType :
NpivDesiredNodeWwns :
NpivDesiredPortWwns :
NpivTemporaryDisabled :
NpivOnNonRdmDisks :
NpivWorldWideNameOp :
LocationId :
GuestId :
AlternateGuestName :
Annotation :
Files :
Tools :
Flags :
ConsolePreferences :
PowerOpInfo :
NumCPUs :
NumCoresPerSocket :
MemoryMB :
MemoryHotAddEnabled :
CpuHotAddEnabled :
CpuHotRemoveEnabled :
VirtualICH7MPresent :
VirtualSMCPresent :
DeviceChange :
CpuAllocation :
MemoryAllocation :
LatencySensitivity :
CpuAffinity :
MemoryAffinity :
NetworkShaper :
CpuFeatureMask :
ExtraConfig :
SwapPlacement :
BootOptions :
VAppConfig :
FtInfo :
VAppConfigRemoved :
VAssertsEnabled :
ChangeTrackingEnabled :
Firmware :
MaxMksConnections :
GuestAutoLockEnabled :
ManagedBy :
MemoryReservationLockedToMax :
NestedHVEnabled :
VPMCEnabled :
ScheduledHardwareUpgradeInfo :
DynamicType :
DynamicProperty :
That's it for changing a VM's detailed settings with PowerCLI.
Author: Colm Keegan, Lead Analyst at Storage Switzerland LLC.
Shared storage, whether it be network-attached storage (NAS) or a storage area network (SAN), has become an essential component of any IT environment, large or small. While the cost of shared storage technology has dropped significantly over the last 10 years, the cost of physical storage solutions is still prohibitive for achieving remote-site redundancy. In addition to high CapEx and OpEx costs, the complexity of implementing and managing physical storage solutions can be daunting for even the most seasoned IT professionals.
In most instances the storage capacity available inside the physical hosts will suffice to support on-site applications. The challenge for organizations managing multiple remote sites is that physical storage solutions require high capital investment for acquisition, deployment, management and maintenance. These physical solutions re-introduce complexity into virtualized environments and become a single point that introduces a higher risk of extended downtime.
For VMs running on ESXi, you sometimes set CPU affinity (pinning the CPUs a VM may use),
for example because of licensing restrictions on the software installed in the guest OS.
As a test, I set CPU affinity with PowerCLI (5.1).
How to set it
In this example the ESXi host has 8 CPUs (Cpu0 through Cpu7),
and I assign 4 vCPUs' worth to each of two VMs (vm01 and vm02).
★Assign CPU 0-3 to vm01.
PowerCLI C:\> Get-VM vm01 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuAffinityList 0,1,2,3
★Assign CPU 4-7 to vm02.
PowerCLI C:\> Get-VM vm02 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuAffinityList 4,5,6,7
★Check the CPU affinity settings. Each VM has its own CPUs assigned.
PowerCLI C:\> (Get-VM * | Get-View) | select Name,{$_.Config.CpuAffinity.AffinitySet}

Name $_.Config.CpuAffinity.AffinitySet
---- ---------------------------------
vm01 {0, 1, 2, 3}
vm02 {4, 5, 6, 7}
CPU affinity can also be assigned
with an option like the one below.
※Note that it is "-CpuAffinity" here, not "-CpuAffinityList".
PowerCLI C:\> Get-VM vm01 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuAffinity Cpu0,Cpu1,Cpu2,Cpu3
PowerCLI C:\> Get-VM vm02 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuAffinity Cpu4,Cpu5,Cpu6,Cpu7
By the way, to clear CPU affinity,
use "Set-VMResourceConfiguration -CpuAffinity NoAffinity".
PowerCLI C:\> Get-VM vm01 | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuAffinity NoAffinity
Caveats
CPU affinity is configured per VM.
So even if you set it on one VM, every other VM on that ESXi host still uses all CPUs by default.
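Because of this, pinning or unpinning many VMs means repeating the setting per VM, for example with a pipeline like the one below (a sketch; substitute your own host address):
PowerCLI C:\> Get-VMHost 192.168.5.61 | Get-VM | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuAffinity NoAffinity   ★Clear affinity on every VM on the host.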
To check, I logged in to the ESXi host and looked at esxtop.
Start esxtop → press capital "V" (show VMs only) → press "f" and then "i" (select display fields),
and a column showing the CPU affinity assignment (AFFINITY_BIT_MASK) appears.
With no CPU affinity configured, all CPUs (0-7) are used.
This is the state with CPU affinity (CPU 0-3) applied only to vm01;
vm02 is still using all CPUs (0-7).
This is the state with affinity applied to both vm01 and vm02.
You can see the VMs using different CPUs.
Incidentally, setting CPU affinity apparently gets in the way of the hypervisor's
free CPU scheduling (the assignment of physical CPUs to vCPUs) to that extent.
From a management and performance standpoint (leaving aside things like guest licensing restrictions),
there seem to be many cases where it is better not to set CPU affinity.
For how to check CPU affinity settings, please refer to this as well.
That's it for setting CPU affinity on VMs with PowerCLI.
Last Saturday there was a PostgreSQL event called PG-unconf (Postgres unconference). The main characteristics of an unconference are that the schedule and the talks are decided only when the conference begins, and the content is entirely decided by the participants. You can talk about whatever you want regarding Postgres, the only restriction being a time limit of 20 minutes.
Going there was a good occasion to discuss vFabric Postgres and the new 9.2 GA release a bit, and to get some feedback about how Postgres is used with this product. The presentation document is attached to this post. A short demonstration of the appliance with VMware Fusion was given, along with an explanation of how vFabric Postgres interacts with vSphere, for example regarding HA and VM management.
The presentation was well received, with positive feedback. The audience mentioned that it is great to have a tool that makes it easy to manage PostgreSQL virtual machines in an appliance that already contains automatic memory settings on the DB side, SSL settings, a server management interface and a database interface. It was also mentioned that it is good to be able to migrate existing Postgres applications to the vFabric Postgres appliance using the existing community client libraries. Lastly, some people in the audience actually use ESX servers with vSphere, and they were highly interested in the performance and management gains that can be achieved by using vFabric Postgres with the existing VMware technologies.
So, good stuff and nice day.
Currently I am working on a project to virtualize a dentist office application server which uses Patterson Eaglesoft. The application runs great in a Windows Server 2003 virtual machine on ESXi using a Dell server.
In my test environment I built a fresh View 5.1 setup and deployed a Windows 8 linked-clone pool, at which point the Composer error below occurred.
View Composer Fault: VC operation exceeded the task timeout limit of 0 mins set by the View Composer for <VM pool name>
■ Cause
It appears the Composer timed out because creating the replica took too long.
According to the explanation in KB2030047, a Composer timeout like this occurs in the cases it describes.
■ Resolution
As described in KB2030047, increase the Composer timeout value (the default is 60 minutes).
If the error still persists after that, the storage performance needs to be checked and, if necessary, improved.
In my case, since ① this was a brand-new test environment and ② the datastore used for deployment was dedicated to View and not used for anything else, the steps above did not resolve it.
When I checked again, it turned out that permissions on the AD OU being deployed to had not been configured. (Oops.)
So I granted the permissions on the target OU, and that resolved the issue.
Originally posted at: http://www.virtxpert.com/a-case-study-in-being-cheap-resilient-infrastructure-design-using-commodity-hardware/
Having worked mostly with smaller companies throughout my career, I have noticed that most tend to spend money on their infrastructure in the wrong places (in my opinion). I have walked into many data centers and seen high-end Dell or HP servers with redundancy tucked into every corner of the server, only to find all the network interfaces connected to a single consumer-grade (small office/home office) switch with a single power supply, daisy-chained to another SOHO switch (typically from a different vendor or model, with different firmware or functionality), which is then connected to other network infrastructure that is also of lower quality. All that money spent on a really nice server, only to have it hooked up to a less-than-adequate infrastructure, is a shame and clearly shows a lack of understanding of the business needs. This led to an idea that I have been building in my head for some time: to build a resilient yet cheap infrastructure with low-end commodity hardware (even from the used or refurbished market) in certain areas and higher-end infrastructure in others.
With Paul Maritz's recent comments about building consumer-grade enterprises, and a discussion that started on Twitter, I decided to follow the rabbit down the hole and see where I ended up. Time (and money) will not permit me to do a real-world test of the design I am writing about here, and we may even find the cost benefit of "cheaping out" in certain areas is not there. Also, as always, the designs and decisions here are not applicable in all scenarios; before embarking on any project you need to clearly understand the needs of the business, the applications it is using, their use cases, and the SLAs (service level agreements) the business has with its customers. For example, I would not advocate doing something like this in a hospital, financial, or e-commerce organization that relies on real-time transactions and high availability. Finally, I am going to focus on just servers. I do not believe that network infrastructure such as firewalls or switches, or the storage area network (SAN), are places you should be looking to "go cheap." There certainly may be ways to save money in those areas, but going for a cheap or refurbished item should not be one of them.
For the purposes of this article, we are going to follow a (semi-)fictitious company called MySoftware, Inc. MySoftware, Inc. is a software development company that has created a web-based application it plans to deliver to its customers in a Software-as-a-Service (SaaS) model. The application is a typical multi-tier application consisting of a web front end, application middleware and a database. The data stored in the application is HR data about the employees of MySoftware, Inc.'s customers. Since this is a multi-tenant model, the web servers for the front-end web UI are load balanced and shared among all customers; no data is stored on these web servers other than basic HTML, CSS, JavaScript, etc. New web servers can be brought online and decommissioned with no impact on the customer by a simple change on the load balancer. Similarly, the application tier does not store data and has been written such that a failure of any single application node can be sustained with minimal interruption to user sessions. The database layer is where the majority of the data processing is handled and where data is saved when it is passed through from the application tier.
MySoftware, Inc. wants to build a highly available infrastructure but also limit its cost. It has been determined that Infrastructure-as-a-Service (IaaS or "cloud") hosting is not a viable long-term option, as the costs will not fit their business model given their projected growth and the number of servers required to satisfy application and customer demand. The infrastructure will be based on VMware vSphere Enterprise Plus, and it has been determined that 18 hosts with at least quad-core 2.0GHz processors and 64GB of RAM will be sufficient for the application to run during peak times, based on load-testing projections, and will support growth over the next 12-18 months. Eight of those hosts will be dedicated to the database layer; the remaining 10 will be shared between VMs running the web and middleware layers. Based on the VMware consultant's design, the cluster can support the loss of between 2 and 6 hosts (be it unplanned downtime or maintenance) at any given time, depending on customer usage cycles. The previous server consultant had suggested current-model HP servers. The owner of MySoftware, Inc. has asked you to find a way to reduce the cost of the implementation.
| Component | HP DL380p G8 (New) | HP DL380 G5 (Used) |
| --- | --- | --- |
| HBA | 2x 2-port 16GB FC SN1000E | 2x 2-port 16GB FC SN1000E |
| Internal Storage | 2x 146GB 15K SAS | 2x 72GB 10K SAS |
| Memory | 64GB PC3-10600R (4x 16GB) | 64GB PC2-5300 (8x 8GB) |
| Network | 12x total (4-port on-board, 2x 4-port NC365T) | 10x total (2-port on-board, 2x 4-port NC365T) |
| Processor | Intel Xeon E5-2608 @ 2.40GHz (quad-core) | Intel Xeon E5430 @ 2.66GHz (quad-core) |
| Total Server Cost | $11,350 | $8,880 |
As you can see from the table, you can save $2,470 per physical host by going with a refurbished host rather than a new one. Obviously there are new CPU features available in the G8 model that are not in the G5, but if you have to cut costs, would you rather cut a CPU feature you may not be using, or redundancy in your switching infrastructure? Spread across the 18 hosts, that is a cost savings of $44,460. Now, based on the size of this project (18 hosts), it should easily be pushing $250,000 if not $500,000, so I would certainly buy the argument that a savings of $45,000 is somewhat insignificant given that these servers will be running your business.
In much smaller companies, the cost savings could look very different. Maybe you have a small company that only needs 4-5 hosts to run internal infrastructure; here the cost savings of $2,470 per physical server (assuming we remove the FC HBAs and keep everything else the same) could easily give you the budget for better SAN or network equipment the customer may not otherwise have been able to afford.
What do you think? Does saving $45,000 justify buying refurbished server equipment when you have a project whose total budget is in the $500,000 range (roughly a 10% savings)? It seems to me the cost benefit clearly favors the SMB, assuming they use that money in other areas.
We were facing issues while adding an Array Manager in SRM 5.0. The steps we performed and the resolution are below.
Setup
VMware SRM 5.0
NetApp FAS 3070 vFiler
NetApp SRA 2.0.1
Troubleshooting Steps
Resolution
On the NetApp, the "httpd" settings were changed
From
httpd.access legacy
httpd.admin.access legacy
httpd.admin.enable off
To
httpd.access host=*
httpd.admin.access host=*
httpd.admin.enable on
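For reference, on a 7-Mode controller these are ordinary options settings, so the change is typically made from the console with commands like the following (a sketch only; verify the exact syntax and the vFiler context against your Data ONTAP version):
options httpd.access host=*
options httpd.admin.access host=*
options httpd.admin.enable on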
We were facing issues while enabling Array Pairs in SRM 5.0. The steps we performed are below.
Setup
VMware SRM 5.0
NetApp FAS 3070 vFiler
NetApp SRA 2.0.1
Error
"Internal error: std::exception 'class Dr::Xml::XmlValidateException' "Element 'SourceDevices' is not valid for content model: '(SourceDevice,)'"."
Resolution
This is a bug confirmed by NetApp, Bug ID: 642115
The simple workaround is to either add the volumes manually in the include list or fall back to SRA 2.0.0.
A walkthrough of the Horizon suite, with the main emphasis on Horizon Workspace.
A short overview of the components of the suite, a walkthrough of Workspace and, not least, a demo of Horizon Workspace.
A public beta of Multi-Hypervisor Manager 1.1, a minor version upgrade of the tool that lets you manage Hyper-V from vCenter, has been released.
Version 1.1 adds ① support for Hyper-V on Windows Server 2012 and ② the ability to migrate Hyper-V VMs from within vCenter in conjunction with vCenter Converter Standalone.
In addition, the number of supported Hyper-V hosts has been raised from 20 to 50, and there have apparently also been UI and plug-in improvements.
For more details, please see the release notes.
If 1.0 is already installed, an upgrade installation is also possible.
The MHM 1.1 Beta can be downloaded here.
My interoperability test results for VDP 5.1 and VSA 5.1 on vSphere 5.1.
Deployment-1:
The VDP plug-in is registered with the Web Client, and the VDP appliance is deployed on an FC datastore.
VSA is registered with the thick client (C#).
The test VMs to be backed up are on a VSA datastore.
Deployment-2:
The VDP plug-in is registered with the Web Client, and the VDP appliance is deployed on a VSA datastore.
VSA is registered with the thick client (C#).
The test VMs to be backed up are on a VSA datastore.
Test Results: (GREEN - "Pass" , RED - "Fail")