Genie Community Forum

Genienlp wants an NVIDIA driver

I am trying to set up Genie to use my local copy of Almond server. In the process, I am getting familiar with the tools. I ran genienlp so I could download the embeddings and received the message:

Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

I guess I have to run out and buy an NVIDIA card if I want to build models locally?
Meanwhile, I can’t seem to find the documentation on how to make Genie aware of my local server.

Cheers,
Andrew

Hey,

Genie should work without issues on CPU. This should be enough to run a model locally for inference (if a bit slow).

For training, we rely on PyTorch to make use of the GPU. I only know of CUDA (NVIDIA) support for PyTorch, but if you look it up you might find resources to run on AMD using ROCm. I am not familiar with those configurations, and they might be OS-specific. We also load some NVIDIA-specific libraries for automatic mixed precision, and I think those might be causing the warning, but it should be harmless.
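If you want to double-check the driver situation yourself, here is a generic shell sketch (this is not a Genie command; nvidia-smi ships together with the NVIDIA driver, so its absence usually means PyTorch will fall back to CPU):

```shell
# Generic sanity check, independent of Genie/genienlp:
# nvidia-smi is installed with the NVIDIA driver, so if it is missing
# there is no driver for PyTorch/CUDA to use.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_status="NVIDIA driver found"
else
  gpu_status="no NVIDIA driver; falling back to CPU"
fi
echo "$gpu_status"
```

Either outcome is fine for inference; only training really benefits from a GPU.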

As for configuring Almond to talk to a local model, I recommend using almond-server for this use case. almond-server is configured using environment variables. Set THINGENGINE_NLP_URL in your environment to a file:/// URL containing the absolute path of a trained model, or an http:// URL pointing to a running genie server serving the model.
Other commands in Genie can also be pointed to a local model, typically by passing a file:/// URL in place of a model HTTP URL.
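For example (the model path and the port below are made up; THINGENGINE_NLP_URL is the variable mentioned above):

```shell
# Point almond-server at a locally trained model on disk.
# The path is a hypothetical example; use the absolute path of your model.
export THINGENGINE_NLP_URL="file:///home/andrew/models/my-model"

# Alternatively, point it at a running genie inference server
# (hypothetical host and port):
# export THINGENGINE_NLP_URL="http://127.0.0.1:8400"

echo "$THINGENGINE_NLP_URL"
```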

Hi Giovanni:

Thanks for the response. This is my fault, but I’m starting to get confused, in part because I am jumping from one install.md to another.

I currently have three different directories: one for almond-server, one for Thingpedia, and one for genie. I am going to restart and put them all under one directory, “almond-development.” I am also going to make a single virtual environment (venv) so I can install genienlp and anything else Python-specific.

I seem to need a developer key. I can’t run the following:
thingpedia download-string-values -d parameter-datasets/ --manifest parameter-datasets.tsv --append-manifest
I’m looking for the documentation about getting the developer key.

As for configuring Almond to talk to a local model, I recommend using almond-server for this use case. almond-server is configured using environment variables. Set THINGENGINE_NLP_URL in your environment to a file:/// URL containing the absolute path of a trained model, or an http:// URL pointing to a running genie server serving the model.
Other commands in Genie can also be pointed to a local model, typically by passing a file:/// URL in place of a model HTTP URL.

I’ll assume that once the developer key is in place, I can follow the remainder of the steps in genie-toolkit/doc/tutorial-basic.md? That said, relative to the genie-toolkit main directory, where should the trained model be? The ‘data’ directory? embeddings? Does it have an extension (e.g., .tsv)?

Cheers,
Andrew

I currently have three different directories: one for almond-server, one for Thingpedia, and one for genie. I am going to restart and put them all under one directory, “almond-development.” I am also going to make a single virtual environment (venv) so I can install genienlp and anything else Python-specific.

Apologies for the confusion. There are several repositories in play here:

  • genie-toolkit is the core conversational AI of Almond. It’s a library that applications can use to build virtual assistants. It also includes some command-line tools to build and deploy semantic parsing models, as you found.
  • thingtalk, thingpedia-api (aka thingpedia in npm) are libraries that genie-toolkit uses. You don’t use these directly, and most likely you won’t need to clone them: npm takes care of that.
  • almond-server, almond-cloud are the actual Almond applications: standalone Node.js applications that provide a voice assistant using Genie technology.
  • thingpedia-common-devices is a collection of useful virtual assistant skills that we developed

Hope this clarifies!

I seem to need a developer key. I can’t run the following:
thingpedia download-string-values -d parameter-datasets/ --manifest parameter-datasets.tsv --append-manifest
I’m looking for the documentation about getting the developer key.

You get a developer key by creating a developer account at almond.stanford.edu, then, from the top-left menu -> Settings -> Apply to be a developer.
(Some repos are by default configured to talk to almond-dev.stanford.edu instead. You cannot, at this time, get a developer key for that server because that’s our internal development server.)
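Once you have the key, it is typically passed to the CLI tools. Here is a sketch of your download-string-values command with it added; the key value is fake, and the exact option name (--developer-key is an assumption here) should be confirmed with thingpedia --help:

```shell
# Fake key for illustration only — use the one from your developer account.
DEVELOPER_KEY="0123456789abcdef"

# Assumed option name (--developer-key); verify with `thingpedia --help`.
cmd="thingpedia --developer-key $DEVELOPER_KEY download-string-values \
-d parameter-datasets/ --manifest parameter-datasets.tsv --append-manifest"
echo "$cmd"
```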

I’ll assume that once the developer key is in place, I can follow the remainder of the steps in genie-toolkit/doc/tutorial-basic.md? That said, relative to the genie-toolkit main directory, where should the trained model be? The ‘data’ directory? embeddings? Does it have an extension (e.g., .tsv)?

If you follow that tutorial, when you reach the training step you’ll specify the output directory where the model should be. A genienlp trained model is a directory containing several files, the most important of which is called “best.pth”.
You can also download previously trained models from our releases page: https://wiki.almond.stanford.edu/releases
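To make the layout concrete, here is a sketch of such a model directory; only best.pth is confirmed above, the other file name is an assumption, and the path is created just for illustration:

```shell
# Build a throwaway example of a genienlp model directory.
model_dir="$(mktemp -d)/my-model"
mkdir -p "$model_dir"
touch "$model_dir/best.pth"     # the trained weights — the important file
touch "$model_dir/config.json"  # hypothetical: model configuration
# almond-server can then be pointed at it:
echo "THINGENGINE_NLP_URL=file://$model_dir"
```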