An efficient way to apply deep learning methods to image, text, and audio data.
DLPy is a high-level Python library for the SAS Deep Learning features available in SAS Viya. DLPy is designed to provide an efficient way to apply deep learning methods to image, text, and audio data. The DLPy APIs are created following the Keras APIs, with a touch of PyTorch flavor.
pip install swat
or conda install -c sas-institute swat
pip install sas-dlpy
or conda install -c sas-institute sas-dlpy
DLPy versions are aligned with the SAS Viya and VDMML versions. Below is the version matrix.
| DLPy  | SAS Viya | VDMML |
| ----- | -------- | ----- |
| 1.2.x | 3.5      | 8.5   |
| 1.1.x | 3.4      | 8.4   |
| 1.0.x | 3.4      | 8.3   |
The table above can be read as follows: DLPy versions from 1.0 (inclusive) to 1.1 (exclusive) are designed to work with SAS Viya 3.4 and VDMML 8.3.
The following versions of external libraries are supported:
- ONNX: versions >= 1.5.0
- Keras: versions >= 2.1.3
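To confirm the versions in your environment, a quick check (assuming the onnx and keras packages are importable):

>>> import onnx, keras
>>> onnx.__version__   # expect >= 1.5.0
>>> keras.__version__  # expect >= 2.1.3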
To connect to a SAS Viya server, import SWAT and use the swat.CAS class to create a connection:
Note: The default CAS port is 5570.
>>> import swat
>>> sess = swat.CAS('mycloud.example.com', 5570)
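If your server requires authentication, credentials can be passed as well (a minimal sketch; SWAT also supports .authinfo files):

>>> sess = swat.CAS('mycloud.example.com', 5570, 'username', 'password')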
Next, import the DLPy package, and then build a simple convolutional neural network (CNN) model.
Import DLPy model functions:
>>> from dlpy import Model, Sequential
>>> from dlpy.layers import *
Use DLPy to create a sequential model and name it Simple_CNN:
>>> model1 = Sequential(sess, model_table='Simple_CNN')
Define an input layer to add to model1:
# The input shape contains RGB images (3 channels)
# The model images are 224 px in height and 224 px in width
>>> model1.add(InputLayer(3,224,224))
NOTE: Input layer added.
Add a 2D convolution layer and a pooling layer:
# Add 2-Dimensional Convolution Layer to model1
# that has 8 filters and a kernel size of 7.
>>> model1.add(Conv2d(8,7))
NOTE: Convolutional layer added.
# Add Pooling Layer of size 2
>>> model1.add(Pooling(2))
NOTE: Pooling layer added.
Add an additional pair of 2D convolution and pooling layers:
# Add another 2D convolution Layer that has 8 filters and a kernel size of 7
>>> model1.add(Conv2d(8,7))
NOTE: Convolutional layer added.
# Add a pooling layer of size 2 to
# complete the second pair of layers.
>>> model1.add(Pooling(2))
NOTE: Pooling layer added.
Add a fully connected layer:
# Add Fully-Connected Layer with 16 units
>>> model1.add(Dense(16))
NOTE: Fully-connected layer added.
Finally, add the output layer:
# Add an output layer that has 2 nodes and uses
# the Softmax activation function
>>> model1.add(OutputLayer(act='softmax',n=2))
NOTE: Output layer added.
NOTE: Model compiled successfully
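With the model assembled, you can review the network structure layer by layer, for example:

>>> model1.print_summary()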
Have something cool to share? SAS gladly accepts pull requests on GitHub! See the Contributor Agreement for details.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at LICENSE.txt
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
I have a project where I have to predict multiple keywords based on the description of a movie.
I am able to train the BERT model with one binary target, but I need to train the model with multiple (431) binary targets. I get an error when I try to load the weights into CAS. Parameters:
bert.load_weights('/opt/sas/viya/config/data/cas/default/public/bert-base-uncased.kerasmodel.h5',
                  num_target_var=num_tgt_var,
                  # Do not freeze base model weights.
                  # Allow layer updates with model tuning.
                  freeze_base_model=False)
Error:
ERROR: Target sequence for text input is not supported.
ERROR: The action stopped due to errors.
DLPyError Traceback (most recent call last)
Input In [17], in
File ~/python-dlpy/dlpy/transformers/bert_model.py:967, in BERT_Model.load_weights(self, path, num_target_var, freeze_base_model, use_gpu, last_frozen_layer)
964 data_spec = self.get_data_spec(num_target_var)
966 # attach layer weights
--> 967 super(BERT_Model, self).load_weights(path,
968 data_spec=data_spec,
969 use_gpu=use_gpu,
970 embedding_dim=self._config['hidden_size'])
972 # determine which layers to freeze
973 self._freeze_layers = self._rnn_layer
File ~/python-dlpy/dlpy/network.py:1003, in Network.load_weights(self, path, labels, data_spec, label_file_name, label_length, use_gpu, embedding_dim)
    1000 self.load_weights_from_caffe(path, labels=labels, data_spec=data_spec, label_file_name=label_file_name,
    1001                              label_length=label_length)
    1002 elif file_name.lower().endswith('kerasmodel.h5'):
--> 1003 self.load_weights_from_keras(path, labels=labels, data_spec=data_spec, label_file_name=label_file_name,
    1004                              label_length=label_length, use_gpu=use_gpu, embedding_dim=embedding_dim)
    1005 elif file_name.lower().endswith('onnxmodel.h5'):
    1006 self.load_weights_from_keras(path, labels=labels, data_spec=data_spec, label_file_name=label_file_name,
    1007                              label_length=label_length, use_gpu=use_gpu, embedding_dim=embedding_dim)
File ~/python-dlpy/dlpy/network.py:1077, in Network.load_weights_from_keras(self, path, labels, data_spec, label_file_name, label_length, use_gpu, embedding_dim)
    1073 self.load_weights_from_file_with_labels(path=path, format_type='KERAS', data_spec=data_spec,
    1074                                         label_file_name=label_file_name, label_length=label_length,
    1075                                         use_gpu=use_gpu, embedding_dim=embedding_dim)
    1076 else:
--> 1077 self.load_weights_from_file(path=path, format_type='KERAS', data_spec=data_spec, use_gpu=use_gpu,
    1078                             embedding_dim=embedding_dim)
File ~/python-dlpy/dlpy/network.py:1187, in Network.load_weights_from_file(self, path, format_type, data_spec, use_gpu, embedding_dim)
    1185 for msg in rt.messages:
    1186     print(msg)
--> 1187 raise DLPyError('Cannot import model weights, there seems to be a problem.')
    1189 # create attributes if necessary
    1190 if not has_data_spec:
DLPyError: Cannot import model weights, there seems to be a problem.
modelPath = '/srv/nfs/kubedata/cas-landingzone/sbxtot/sounds'
s.addcaslib(activeonadd=True, datasource={'srctype': 'path'}, name='mycaslib', path=modelPath, subdirectories=True)
s.setsessopt(caslib='mycaslib')
local_data_test = '/srv/nfs/kubedata/cas-landingzone/sbxtot/sounds/test'
server_data_test = '/srv/nfs/kubedata/cas-landingzone/sbxtot/sounds/test'
local_data_train = '/srv/nfs/kubedata/cas-landingzone/sbxtot/sounds/train'
server_data_train = '/srv/nfs/kubedata/cas-landingzone/sbxtot/sounds/train'
local_data_validate = '/srv/nfs/kubedata/compute-landingzone/sbxtot/sounds/validate'
server_data_validate = 'srv/nfs/kubedata/compute-landingzone/sbxtot/sounds/validate'
audio_table_train = AudioTable.load_audio_files(s,
casout={'name':'audio_files_train','replace':'True'},
local_audio_path=local_data_train,
server_audio_path=server_data_train)
I can't understand why this procedure can't process those files and constantly throws this error:
Cannot convert file /srv/nfs/kubedata/cas-landingzone/sbxtot/sounds/train/abnormal/00000110.wav
Number of files processed: 0
File conversions are finished.
ERROR: A PATH type caslib is required when using filetype DOCUMENT.
ERROR: The action stopped due to errors.
ERROR: A PATH type caslib is required when using filetype DOCUMENT.
ERROR: The action stopped due to errors.
Meanwhile, the following procedure works and creates my table, but it lumps the content of all the sounds subdirectories (test, train, validate) into one table:
rt2 = s.retrieve('table.loadtable', _messagelevel='error',
                 casout={'name': 'audio_files_train', 'replace': 'True'},
                 caslib='mycaslib', path='',
                 importOptions=dict(fileType='AUDIO', contents=True, recurse=True))
Can you let me know how to solve this issue? Thanks, Tom.
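One possible workaround, assuming table.loadtable resolves path relative to the caslib root, is to point the load at the train subdirectory directly (a sketch based on the working call above):

rt_train = s.retrieve('table.loadtable', _messagelevel='error',
                      casout={'name': 'audio_files_train', 'replace': 'True'},
                      caslib='mycaslib', path='train',
                      importOptions=dict(fileType='AUDIO', contents=True, recurse=True))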
Hi,
When you try to export a model in ONNX format, the deploy method expects the InputLayer parameters height and n_channels not to be None, even though they are meant to be used only with image data.
It would be nice not to have to specify dummy values for them when training RNNs and DNNs, for example.
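For example, exporting a simple recurrent model today requires dummy spatial dimensions along these lines (a sketch; the layer sizes are arbitrary):

>>> model = Sequential(sess, model_table='Simple_RNN')
>>> model.add(InputLayer(n_channels=1, width=100, height=1))  # dummy values, meaningful only for images
>>> model.add(Recurrent(n=50))
>>> model.add(OutputLayer(act='softmax', n=2))
>>> model.deploy(path='/path/to/dir', output_format='onnx')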
Best regards, Victor
If I deploy a model to a table using model.deploy(path='/path/to/dir', output_format='table'), it produces 3 tables:
- modelname.sashdat
- modelname_weights.sashdat
- modelname_weights_attr.sashdat
If I try to load these tables with s.table.loadTable, it complains that it can't find modelname.ATTRS.sashdat; if I rename the attributes table to the name it expects, it works.
modeldetect = s.table.loadTable(caslib='scz', path='detectmobile/Model_M43t70.sashdat', loadattrs=True)
modeldetectname = s.table.loadTable(caslib='scz', casout={'name': 'mobiledetect'}, path='detectmobile/Model_M43t70_weights.sashdat')
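Renaming the attributes table on disk is the workaround I use (a sketch; the modelname paths are illustrative):

import os

# Hypothetical paths: rename the deployed attributes table to the
# name that loadTable(..., loadattrs=True) looks for.
os.rename('/path/to/dir/modelname_weights_attr.sashdat',
          '/path/to/dir/modelname.ATTRS.sashdat')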
Hi,
I've been trying to execute the create_object_detection_table function with no luck so far. I am using only the data_path option, as the images and the CAS server are on the same machine.
Attached is the error I am receiving. I really don't get why it is looking for an object_x file or path.
Any ideas?
Thanks!!
Matteo
The issue relates to implementing DepthwiseConv2d, SeparableConv2d, and PointwiseConv2d.
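For reference, a separable convolution is a depthwise convolution (one filter per input channel) followed by a pointwise (1x1) convolution. A sketch of the equivalence using the Keras layers of the same names (parameters are illustrative):

import tensorflow as tf

x = tf.random.normal((1, 224, 224, 3))

# Depthwise step: one 3x3 filter per input channel, no channel mixing.
depthwise = tf.keras.layers.DepthwiseConv2D(kernel_size=3, padding='same')
# Pointwise step: a 1x1 convolution that mixes channels.
pointwise = tf.keras.layers.Conv2D(filters=8, kernel_size=1)
# Fused form: SeparableConv2D performs both steps in one layer.
separable = tf.keras.layers.SeparableConv2D(filters=8, kernel_size=3, padding='same')

y1 = pointwise(depthwise(x))  # depthwise followed by pointwise
y2 = separable(x)             # same output shape (weights differ)
assert y1.shape == y2.shape   # (1, 224, 224, 8)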
Highlights Include:
The latest DLPy release can be installed using the following pip command:
pip install sas-dlpy
Or, if you use Anaconda:
conda install -c sas-institute sas-dlpy