Articles on this Page
- 02/17/18--02:27: Re: Compute mode vs graphics
- 02/17/18--03:00: Re: Compute mode vs graphics
- 02/17/18--03:39: Re: Compute mode vs graphics
- 02/17/18--05:08: Re: Compute mode vs graphics
- 02/17/18--06:13: Re: Compute mode vs graphics
- 02/17/18--06:59: Re: Compute mode vs graphics
- 02/17/18--07:28: Re: Compute mode vs graphics
- 02/18/18--10:39: Re: Balancing VDI storage during provisioning
- 02/18/18--11:48: Certificate question
- 02/19/18--01:36: Re: Certificate question
- 02/19/18--09:32: Datastore Clean-up Question
- 02/19/18--21:42: Re: Datastore Clean-up Question
- 02/20/18--04:36: viewdbchk on Horizon 7.3.2
- In Horizon 7 version 7.2 or later, the viewDBChk tool will not have access to vCenter or View Composer credentials and will prompt for this information when needed.
- 02/20/18--05:12: Re: viewdbchk on Horizon 7.3.2
- 02/20/18--05:31: Replicating appstack between DCs
- 02/20/18--06:01: Re: Replicating appstack between DCs
- 02/20/18--06:14: Re: Replicating appstack between DCs
- 02/20/18--06:33: Re: Replicating appstack between DCs
- 02/20/18--03:45: os permissions for View 6.2 PowerCLI scripts
- 02/20/18--07:06: Re: Datastore Clean-up Question
Yes, neither KB gives advice on how to configure "Compute Mode" properly. But lspci does show that the card is in compute mode, given that it reports class 0302.
Did you add a virtual video adapter to the VM that uses the VMware video driver? From the looks of it, the VM is looking for a graphics card to boot with, and it is looking for the Nvidia card that was configured for video display passthrough but is now in compute mode.
On the consumer GeForce line in a physical PC/laptop, the card can serve both purposes at the same time (graphics display and compute capability can still be used). An application would usually use CUDA for the compute capability on the Nvidia card.
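The class code mentioned above can be read straight out of the lspci output. A small sketch of the check; the sample line and device IDs below are illustrative, not taken from the poster's host:

```shell
# PCI class codes for Nvidia cards:
#   0300 = VGA compatible controller (graphics mode)
#   0302 = 3D controller (compute mode)
# Illustrative "lspci -n" line for an M60 in compute mode:
line="0000:05:00.0 0302: 10de:13f2"
case "$line" in
  *" 0302: "*) echo "compute mode (3D controller)" ;;
  *" 0300: "*) echo "graphics mode (VGA controller)" ;;
esac
# On the real host or inside the guest, check with: lspci -n | grep 10de
```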
Thanks for the fast reply.
Sorry, I lack knowledge when it comes to this, so please educate me.
I have set the Nvidia M60 card to compute mode, as you can see on the ESXi host, but I am not sure about the video settings when deploying a VM that should use compute mode.
Should I configure it like this?
Or should I configure it like this?
I don't know whether I need to use vGPU profiles in compute mode, or if that's not the correct way?
Thanks again for the reply.
I think you know more about GPU passthrough than I do. It does not seem easy to find documentation about GPU passthrough as a compute node.
But I would think you need to add a virtual graphics adapter that does not use the M60 passthrough. That way the CentOS VM should boot up using the vmwgfx driver instead of trying to load an nvidia driver that looks for the M60 card that is already in compute mode. Perhaps add the M60 as an "Other device" for passthrough so that the CentOS VM sees it as well, and verify using lspci inside the CentOS VM.
Well, I don't understand the whole picture.
To me it seems like I have to do the following:
1. Set the ESXi host in Compute mode
- This is done
2. Configure the M60 as pass-thru mode
- This has now been done; it was not done before in the other posts. I believe it has to be configured as pass-thru mode, but I'm not sure? And as you can see from the image below, the VM has the M60 card assigned.
As you can see from the image below, I added the M60 card as a PCI Device and not a Shared PCI Device, as I believe this is correct since it's pass-thru.
No errors so far, but when I now try to boot the VM I get the following error.
I guess there are some simple steps that I am missing, or doing wrong.
For example, what I don't know is whether I should use compute mode with pass-thru, or whether it will only work in graphics mode with pass-thru.
Is the error still the same?
The differences between "Compute" and "Graphics" mode are documented here
So there is no ECC and a smaller BAR address space (256 MB vs 8 GB) in "graphics mode", and legacy mode is disabled in "compute mode". Legacy mode disabled will probably not work with a VM whose virtual firmware is BIOS. A large BAR address space requires the VM to use EFI as its virtual firmware and 64-bit MMIO.
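For what it's worth, the VM settings involved can be sketched as .vmx entries like the following (the MMIO size value is illustrative; it has to be large enough to cover the passthrough device's BARs):

```
firmware = "efi"
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
```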
I have a hunch that it is not an absolute requirement for the Tesla M60 to be set to "Compute Mode" in order for the compute capabilities to work (just as CUDA applications work on GeForce cards that are also used as display devices in physical PCs/laptops). If you know the GPU compute application uses CUDA, you can download and try any of the CUDA samples from the Nvidia website (although I think you have to set up a build environment and build the samples). Or you could ask the users to supply you with the CUDA samples as a test to check whether the CentOS VM recognises the card as a CUDA compute device aside from being used as a display device.
Not quite the same error message, but the VM will not start.
Here is what I have tried:
- ESXi host in compute mode, M60 configured as shared, added as a Shared PCI Device with vGPU profile Q8: the VM fails to boot.
- ESXi host in compute mode, M60 configured as pass-thru, added as a PCI Device: the VM fails to boot.
- ESXi host in graphics mode, M60 configured as shared, added as a Shared PCI Device with vGPU profile Q8: the VM boots OK, but I am not sure it can then be used with CUDA applications.
- ESXi host in graphics mode, M60 configured as pass-thru, added as a PCI Device: the VM boots OK, but I am not sure it can then be used with CUDA applications.
When I configured compute mode, I did not change anything specific; I only ran the command gpumodeswitch --gpumode compute and rebooted the host. Do you believe I have to do the things shown below?
When I right-click the VM and check the BIOS setting, it says "BIOS" and not "EFI".
I am really out of my depth regarding this...hehe.
You are referring to the CUDA applications/samples from Nvidia; is there an easy way to test whether the VM is configured for CUDA, for example a simple .exe application that will start and say "Yes, this machine will work with CUDA applications" hehe?
You can't switch an existing VM's virtual firmware between BIOS and EFI, as it won't be bootable anymore.
It looks like you can set the mode of the individual GPUs in the Tesla M60 separately.
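A sketch of switching each physical GPU on the M60 board separately with Nvidia's gpumodeswitch tool from the ESXi shell. Note that the --gpuid flag and the index values here are assumptions; run the list command first and check the gpumodeswitch documentation for the exact syntax on your host:

```shell
# List the GPUs on the board and their current mode.
gpumodeswitch --listgpumodes
# Hypothetical: leave GPU 0 in graphics mode, switch GPU 1 to compute mode,
# then reboot the host for the change to take effect.
gpumodeswitch --gpumode graphics --gpuid 0
gpumodeswitch --gpumode compute --gpuid 1
```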
As an alternative, you can create a new CentOS VM that uses EFI virtual firmware.
You can test the existing CentOS VM (with BIOS virtual firmware) using one GPU in the Tesla M60 set to graphics mode, while the other GPU runs in compute mode with a new CentOS VM that uses EFI virtual firmware. This way you can move forward with testing GPU compute on the existing VM while you create the other CentOS VM using EFI. You might even end up able to compare the difference in performance (if any) of the GPU compute application with the Tesla M60 in graphics versus compute mode.
The CUDA SDK has some samples, but a build environment has to be set up and the samples have to be built. Still, given that CUDA applications work on the lower-priced GeForce cards as both display and compute devices, it is unlikely that a much more expensive Tesla M60 would lack the capabilities of a lower-priced GeForce card. A lot of the ready-to-download-and-install demos from Nvidia are rendering demonstrations (so not necessarily CUDA compute) and are usually available only for Windows.
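If all you need is a yes/no answer on whether the guest sees a CUDA device, the deviceQuery sample that ships with the CUDA toolkit is about as simple as it gets. A sketch, assuming the toolkit is already installed on the CentOS VM (the 9.0 version number is an assumption; the installer script name varies with the toolkit release):

```shell
# Copy the bundled samples into a writable directory and build deviceQuery.
cuda-install-samples-9.0.sh ~
cd ~/NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery
make
# Prints the detected CUDA device(s); ends with "Result = PASS" on success.
./deviceQuery
```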
Yes this will work.
Horizon should balance the datastores during provisioning as well, but if you want VMs 01-04 on DS1 and VMs 05-08 on DS2, then your process will meet that requirement. Remember, if you don't add DS1 back into the pool's datastores, any further machines you provision will only be created on DS2, which may ruin your balancing work and oversubscribe the datastore.
We are configuring Horizon View in a closed network with no access to the internet.
I was wondering about the certificate that is created during installation; as I understand it, this certificate should be replaced since the system will go into production.
Do we need to create our own CA within our domain for this? Or is it possible to create a CSR and request some kind of certificate from a certificate vendor like godaddy.com? Or will that fail since we don't have access to the internet and the chain cannot be verified?
I don't have much knowledge when it comes to certificates.
Thanks for the reply.
You can use a certificate from your own CA as well as a valid certificate from any other public CA.
If you use your own CA, you should make sure that your root certificate is trusted by the clients. If not, they will get the same error as they do now with the self-signed one.
If you request a certificate, please make sure you read the guide (obtaining a signed SSL certificate) regarding, e.g., key length.
A CRL should be in place but is not strictly required. If the CRL can't be reached and Horizon displays the servers as red/faulty in the admin GUI, please check this KB: VMware Knowledge Base
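If you go the public-CA (or internal-CA) route, the CSR itself can be generated offline; the internet is only involved when the CA signs it. A minimal sketch with OpenSSL (the hostname and subject fields are placeholders; follow the Horizon guide for the required key length, 2048-bit RSA or better):

```shell
# Generate a 2048-bit private key and a CSR in one step (no passphrase).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout connectionserver.key \
  -out connectionserver.csr \
  -subj "/C=US/O=Example/CN=connectionserver.example.local"
# Sanity-check the CSR before sending it to the CA.
openssl req -in connectionserver.csr -noout -verify -subject
```

The resulting .csr file is what you hand to the CA; the .key file stays on the Connection Server.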
We are running Horizon View 7.0.1.
I was looking in a datastore that is exclusive to linked clones. I noticed all these folders with log files for linked clones that no longer exist. I checked the ADAM database and found no information on the machines. Should I run an SviConfig cleanup or could I just delete the folders? I should say that the machines do not show up in View Administrator.
Thanks for the help.
Does this sound like what you saw? In addition to rebalance operations, we are seeing this when VMs are deleted.
We use viewdbchk every now and again to clean up pools in Horizon when the View Admin console is having issues doing so, just like everyone else. However, since the move to 7.3.2, we can't seem to run it.
According to the release notes there was a change: in Horizon 7 version 7.2 or later, the viewDBChk tool no longer has access to vCenter or View Composer credentials and prompts for this information when needed.
This is fine; we know what credentials to use. When we run the command, though ("viewdbchk --removeMachine --machineName Xyz --noErrorCheck"), we are prompted for our service account password for vCenter (https://vcenter.fqdn:443/dsk) and for Composer (https://composer.fqdn:18443). When we type it, no characters show, not even masked (this might be by design), and when we hit enter to confirm anyway, we are greeted with "ERROR: Cannot get password for user "service_account"".
We know the password; we even tested it separately. Is there anything we can provide within the command differently, perhaps?
Thanks in advance.
Oddly enough, when a co-worker of mine logs into the same connection broker as me and runs the same command, he gets no prompt for the credentials, and viewdbchk runs perfectly.
I was reading the documentation about appstacks and non-attachable storage, but I still don't really know how to move appstacks between DCs. If I have FC storage, then I cannot present a LUN to both datacentres when they are on different continents with only IP connectivity.
So I guess you need storage that supports array-level replication over IP, so that you basically have a non-attachable storage LUN presented in each DC from the local array.
The other two options that come to mind are:
- Use some software like Veeam and replicate the appstack VMDKs. Can you even replicate VMDKs that are not part of a VM?
- Use a VM that does an NFS export which is then mounted on vSphere in both DCs. It is questionable how an NFS datastore would behave over something like 200 ms of latency.
Any better ideas?
I did not try it yet, but maybe this could help: App Volumes Backup Utility
For backup purposes, we use NFS to copy the files between datastores. You can probably do the same thing in your scenario.
Other possible solutions for you:
Take a look at Storage groups within App Volumes
You can also manually copy them:
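As a sketch of what a manual copy can look like from the ESXi shell, on a host that can see both datastores (the datastore and appstack names are hypothetical; vmkfstools clones the VMDK, and the metadata file next to it is a plain file copy):

```shell
# Clone the appstack VMDK from DS1 to DS2, keeping it thin-provisioned.
vmkfstools -i /vmfs/volumes/DS1/cloudvolumes/apps/MyAppStack.vmdk \
           -d thin \
           /vmfs/volumes/DS2/cloudvolumes/apps/MyAppStack.vmdk
# Copy the accompanying metadata file as-is.
cp /vmfs/volumes/DS1/cloudvolumes/apps/MyAppStack.vmdk.metadata \
   /vmfs/volumes/DS2/cloudvolumes/apps/
# Then rescan/import the datastore from the App Volumes Manager UI.
```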
This looks more like a one-time migration; I need to regularly copy appstacks from one DC to the other so we do not have to do the packaging process twice.
The answer to this seems to be storage groups and non-attachable storage that serves as a swing LUN between DCs, as per the VMware reference architecture guide. The only problem is that this non-attachable datastore must be presented in both DCs, which is only feasible if they are relatively close and well connected.
I cannot find information about the Windows OS permissions needed for running Horizon View 6.2 PowerCLI scripts.
When a user has Local Administrators permission, it works well.
But when a user doesn't have Local Administrators permission, it ends with the following error:
View Server connect FAILED
+ CategoryInfo : NotInstalled: (vmware.view.powershell.cmdlets.GetUser:GetUser) [Get-User], Exception
+ FullyQualifiedErrorId : Node Manager not running,vmware.view.powershell.cmdlets.GetUser
+ PSComputerName : xxxxxxx
Does anybody know what minimum OS permissions are needed for running Horizon View 6.2 PowerCLI scripts?
Thanks for any help.
Very similar. I see a message in the log file that vMotion was successful.
Thanks for your help.