pyDataverse is a Python module for Dataverse. It helps you access the Dataverse APIs and manipulate, validate, import and export all Dataverse data types (Dataverse, Dataset, Datafile).
Find out more: Read the Docs
Background: Support for direct upload of datafiles using Python is available via the following standalone script related to the Harvard Dataverse Repository: dataverse.harvard.edu/util/python/direct-upload/directupload.py
This script enables users to upload many datafiles and their associated metadata all at once before requesting reindexing, rather than calling the API for each file, which hurts system performance due to frequent reindexing.
Request & Rationale: Incorporating this functionality into pyDataverse would benefit Dataverse API users and pyDataverse users at all installations who need to upload large numbers of datafiles.
Even though I set "replace" = True, I get:
You may not add data to a field that already has data and does not allow multiples. Use is_replace=true to replace existing data.
I even printed out the params, with the result: {'replace': True}
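If the flag set on the pyDataverse side does not reach the server, one workaround is to call the native API's editMetadata endpoint directly, which per the Dataverse native-API guide accepts a `replace=true` query parameter. A minimal stdlib-only sketch; the base URL, token and DOI values are placeholders:

```python
import json
import urllib.parse
import urllib.request

def build_edit_metadata_url(base_url, pid, replace=True):
    """Build the native-API editMetadata URL with the replace flag set."""
    params = urllib.parse.urlencode(
        {"persistentId": pid, "replace": "true" if replace else "false"}
    )
    return f"{base_url}/api/datasets/:persistentId/editMetadata?{params}"

def edit_metadata(base_url, api_token, pid, metadata):
    """PUT the metadata dict (Dataverse field format) to the endpoint."""
    req = urllib.request.Request(
        build_edit_metadata_url(base_url, pid),
        data=json.dumps(metadata).encode("utf-8"),
        method="PUT",
        headers={"X-Dataverse-key": api_token,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With `replace=true` in the query string, the server replaces single-value fields instead of raising the "does not allow multiples" error.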
Thank you for your contribution!
It's great that you want to contribute to pyDataverse.
First, start by reading the Bug reports, enhancement requests and other issues section.
Before moving on, please check some things first:
pip freeze
[Explain the reason for your issue]
Dear pyDataverse developers, I am running pyDataverse to upload a dataset to our Dataverse installation. Our dataset contains custom metadata ("freeKeywordValue").
As recommended in the tutorial, I create the dataset with "ds.from_json(data_json)" (the example is at the end of this message). However, the upload fails, and when I check with "ds.get()", I see that the custom metadata fields are not there anymore. The server also rejects the input, because pyDataverse removes the entry "metadataLanguage": "en", which is required by Dataverse as stated in the Dataverse guide https://guides.dataverse.org/en/latest/api/native-api.html#id43.
If I use a simple cURL call, my JSON is accepted perfectly and the dataset is inserted as expected.
The original JSON:
````
{
  "metadataLanguage": "en",
  "datasetVersion": {
    "metadataBlocks": {
      "citation": {
        "fields": [
          {
            "value": "Youth in Austria 2005",
            "typeClass": "primitive",
            "multiple": false,
            "typeName": "title"
          },
          {
            "value": [
              {
                "authorName": {
                  "value": "LastAuthor1, FirstAuthor1",
                  "typeClass": "primitive",
                  "multiple": false,
                  "typeName": "authorName"
                },
                "authorAffiliation": {
                  "value": "AuthorAffiliation1",
                  "typeClass": "primitive",
                  "multiple": false,
                  "typeName": "authorAffiliation"
                }
              }
            ],
            "typeClass": "compound",
            "multiple": true,
            "typeName": "author"
          },
          {
            "value": [
              {
                "datasetContactEmail": {
                  "typeClass": "primitive",
                  "multiple": false,
                  "typeName": "datasetContactEmail",
                  "value": "[email protected]"
                },
                "datasetContactName": {
                  "typeClass": "primitive",
                  "multiple": false,
                  "typeName": "datasetContactName",
                  "value": "LastContact1, FirstContact1"
                }
              }
            ],
            "typeClass": "compound",
            "multiple": true,
            "typeName": "datasetContact"
          },
          {
            "value": [
              {
                "dsDescriptionValue": {
                  "value": "DescriptionText",
                  "multiple": false,
                  "typeClass": "primitive",
                  "typeName": "dsDescriptionValue"
                }
              }
            ],
            "typeClass": "compound",
            "multiple": true,
            "typeName": "dsDescription"
          },
          {
            "value": [
              {
                "freeKeywordValue": {
                  "value": "MyKeyword1",
                  "multiple": false,
                  "typeClass": "primitive",
                  "typeName": "freeKeywordValue"
                }
              },
              {
                "freeKeywordValue": {
                  "value": "MyKeyword2",
                  "multiple": false,
                  "typeClass": "primitive",
                  "typeName": "freeKeywordValue"
                }
              }
            ],
            "typeClass": "compound",
            "multiple": true,
            "typeName": "freeKeyword"
          },
          {
            "value": [
              "Medicine, Health and Life Sciences"
            ],
            "typeClass": "controlledVocabulary",
            "multiple": true,
            "typeName": "subject"
          }
        ],
        "displayName": "Citation Metadata"
      }
    }
  }
}
````
The output of ds.get():
{'citation_displayName': 'Citation Metadata', 'title': 'Youth in Austria 2005', 'author': [{'authorName': 'LastAuthor1, FirstAuthor1', 'authorAffiliation': 'AuthorAffiliation1'}], 'datasetContact': [{'datasetContactEmail': '[email protected]', 'datasetContactName': 'LastContact1, FirstContact1'}], 'dsDescription': [{'dsDescriptionValue': 'DescriptionText'}], 'subject': ['Medicine, Health and Life Sciences']}
The incoming JSON in the server log:
{
"datasetVersion": {
"metadataBlocks": {
"citation": {
"fields": [
{
"typeName": "subject",
"multiple": true,
"typeClass": "controlledVocabulary",
"value": [
"Medicine, Health and Life Sciences"
]
},
{
"typeName": "title",
"multiple": false,
"typeClass": "primitive",
"value": "Youth in Austria 2005"
},
{
"typeName": "author",
"multiple": true,
"typeClass": "compound",
"value": [
{
"authorName": {
"typeName": "authorName",
"typeClass": "primitive",
"multiple": false,
"value": "LastAuthor1, FirstAuthor1"
},
"authorAffiliation": {
"typeName": "authorAffiliation",
"typeClass": "primitive",
"multiple": false,
"value": "AuthorAffiliation1"
}
}
]
},
{
"typeName": "datasetContact",
"multiple": true,
"typeClass": "compound",
"value": [
{
"datasetContactEmail": {
"typeName": "datasetContactEmail",
"typeClass": "primitive",
"multiple": false,
"value": "[email protected]"
},
"datasetContactName": {
"typeName": "datasetContactName",
"typeClass": "primitive",
"multiple": false,
"value": "LastContact1, FirstContact1"
}
}
]
},
{
"typeName": "dsDescription",
"multiple": true,
"typeClass": "compound",
"value": [
{
"dsDescriptionValue": {
"typeName": "dsDescriptionValue",
"typeClass": "primitive",
"multiple": false,
"value": "DescriptionText"
}
}
]
}
],
"displayName": "Citation Metadata"
}
}
}
}
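Until pyDataverse preserves custom metadata blocks and `metadataLanguage`, one workaround is to POST the original JSON unchanged, exactly as cURL does, bypassing the Dataset model entirely. A stdlib-only sketch; the base URL, token and collection alias are placeholders:

```python
import json
import urllib.request

def build_create_dataset_url(base_url, dataverse_alias):
    """Native-API endpoint for creating a dataset in a collection."""
    return f"{base_url}/api/dataverses/{dataverse_alias}/datasets"

def create_dataset_raw(base_url, api_token, dataverse_alias, dataset_json):
    """POST the dataset JSON exactly as written, so custom fields and
    the metadataLanguage entry survive untouched."""
    req = urllib.request.Request(
        build_create_dataset_url(base_url, dataverse_alias),
        data=json.dumps(dataset_json).encode("utf-8"),
        headers={"X-Dataverse-key": api_token,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

This mirrors the cURL call that the reporter says works, so nothing is stripped in between.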
Hi, recently our institution updated its Dataverse version to 5.11.1. Previously, the metadata field Kind of Data wasn't mandatory for our installation, but now it is, and it appears with a dropdown with some options.
I am trying to create a dataset with pyDataverse with this code:

```python
from pyDataverse.models import Dataset
from pyDataverse.utils import read_file

ds = Dataset()
ds_filename = "dataset.json"
ds.from_json(read_file(ds_filename))
ds.validate_json()
resp = api.create_dataset("pyDataverse_user-guide", ds.json())
resp.json()
```
But I got this error: {'status': 'ERROR', 'message': 'Error parsing Json: incorrect typeClass for field kindOfData, should be controlledVocabulary'}
I have attached the JSON I am using.
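The error means the JSON declares `kindOfData` with `typeClass: "primitive"`, while the updated installation's metadata block defines it as a controlled vocabulary. A sketch that patches the field in the loaded dict before upload; the sample value is illustrative, and controlled-vocabulary values must match the installation's dropdown options exactly:

```python
def fix_type_class(dataset_json, type_name, type_class="controlledVocabulary"):
    """Set the typeClass of one citation field in a native-API dataset dict."""
    fields = dataset_json["datasetVersion"]["metadataBlocks"]["citation"]["fields"]
    for field in fields:
        if field["typeName"] == type_name:
            # A list value is expected when "multiple" is true.
            field["typeClass"] = type_class
    return dataset_json

# Example: a minimal fields list carrying the wrong typeClass.
ds = {"datasetVersion": {"metadataBlocks": {"citation": {"fields": [
    {"typeName": "kindOfData", "typeClass": "primitive",
     "multiple": True, "value": ["Survey"]},
]}}}}
fixed = fix_type_class(ds, "kindOfData")
```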
How can I delete a specific datafile? How can I replace a datafile?
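For replacement, pyDataverse's NativeApi offers `replace_datafile()`; for deletion, one route documented in the Dataverse guides is the SWORD edit-media endpoint (the native API has not traditionally exposed file deletion). A sketch of the two endpoint URLs involved; the base URL and file id are placeholders, so double-check against your installation's guides:

```python
def build_sword_delete_url(base_url, file_id):
    """SWORD edit-media endpoint used to delete a (draft) datafile."""
    return (f"{base_url}/dvn/api/data-deposit/v1.1/swordv2"
            f"/edit-media/file/{file_id}")

def build_replace_url(base_url, file_id):
    """Native-API endpoint behind datafile replacement."""
    return f"{base_url}/api/files/{file_id}/replace"
```

The SWORD delete request authenticates with the API token as the username and an empty password, and replacement only works on files in a draft or published dataset where you hold edit rights.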
```python
import json

from pyDataverse.api import NativeApi, DataAccessApi
from pyDataverse.models import Dataverse

base_url = ''
token = ''
api = NativeApi(base_url, token)
data_api = DataAccessApi(base_url, token)

DOI = " "
dataset = api.get_dataset(DOI)
dictmetadata = dataset.json()
dictmetadata['data']['latestVersion']['metadataBlocks']['citation']['fields'][0]['value'] = 'new title'

jsonStr = json.dumps(dictmetadata)
```
I get response [500] and the title isn't changed. How could I fix it? And how would it work with a JSON file? Thanks.
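A likely cause of the 500 is sending the whole `get_dataset()` response back, instead of a bare version object: the native API's draft-update endpoint (`PUT /api/datasets/:persistentId/versions/:draft`) expects just the `metadataBlocks`. A hedged sketch; the helper names and the sample dict are illustrative, not pyDataverse API, and matching on `typeName` is safer than indexing `fields[0]`:

```python
import urllib.parse

def set_title(dataset_response, new_title):
    """Pull metadataBlocks out of a get_dataset() response, set the title
    field by typeName, and return the body for the draft-update endpoint."""
    blocks = dataset_response["data"]["latestVersion"]["metadataBlocks"]
    for field in blocks["citation"]["fields"]:
        if field["typeName"] == "title":
            field["value"] = new_title
    return {"metadataBlocks": blocks}

def build_update_url(base_url, doi):
    """Draft-version update endpoint from the native-API guide."""
    params = urllib.parse.urlencode({"persistentId": doi})
    return f"{base_url}/api/datasets/:persistentId/versions/:draft?{params}"

# Minimal stand-in for a get_dataset() response.
sample = {"data": {"latestVersion": {"metadataBlocks": {"citation": {"fields": [
    {"typeName": "title", "typeClass": "primitive",
     "multiple": False, "value": "Old title"},
]}}}}}
body = set_title(sample, "new title")
```

The returned `body` is what gets PUT (with the `X-Dataverse-key` header) to the URL from `build_update_url()`.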
Small bugfix of #126.
For help or general questions please have a look in our Docs or email [email protected]
Thanks to Karin Faktor for finding the bug.
PyDataverse is supported by AUSSDA and by funding as part of the Horizon2020 project SSHOC.
This release is a big change in many parts of the package. It adds new APIs, re-factored models and lots of new documentation.
Overview of the most important changes:
- `get_children()`
- JSON schemas (`jsonschemas`)
- `update_datafile()`
Version 0.3.0 is named in honor of Ruth Wodak (Wikipedia), an Austrian linguist. Her work is mainly located in discourse studies, more specifically in critical discourse analysis, which looks at discourse as a form of social practice. She was awarded the Wittgenstein-Preis, the highest Austrian science award.
The new functionalities were developed with some specific use-cases in mind:
See more details in our Documentation.
Retrieve data structure and metadata from Dataverse instance (DevOps)
Collect all Dataverses, Datasets and Datafiles of a Dataverse instance, or just a part of it. The results can then be stored in JSON files, which can be used for testing purposes, like checking the completeness of data after a Dataverse upgrade or migration.
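For illustration, here is a minimal stand-alone walker over such a tree, assuming the nested format with `type` and `children` keys that `get_children()` produces (the sample tree and key names are assumptions; pyDataverse's own `dataverse_tree_walker()` does this for you):

```python
def walk_tree(node, collected=None):
    """Split a nested children tree into dataverses, datasets and datafiles.
    Assumes each node carries a 'type' key and nests under 'children'."""
    if collected is None:
        collected = {"dataverses": [], "datasets": [], "datafiles": []}
    # Store the node itself without its children sub-tree.
    collected[node["type"] + "s"].append(
        {k: v for k, v in node.items() if k != "children"})
    for child in node.get("children", []):
        walk_tree(child, collected)
    return collected

# Hand-made sample tree, two levels deep.
tree = {
    "type": "dataverse", "dataverse_alias": "root", "children": [
        {"type": "dataset", "pid": "doi:10.5072/FK2/AAA", "children": [
            {"type": "datafile", "datafile_id": 7, "filename": "data.csv"},
        ]},
    ],
}
result = walk_tree(tree)
```

Each of the three lists can then be dumped to its own JSON file with `json.dump()` and diffed before and after a migration.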
Upload and removal of test data (DevOps)
For testing, you often have to upload a collection of data and metadata, which should be removed after the test is finished. For this, we offer easy-to-use functionalities.
Import data from CSV templates (Data Scientist)
Importing lots of data from sources outside Dataverse can be done with the CSV templates as a bridge. Fill the CSV templates with your data, by machine or by hand, and import them into pyDataverse for an easy mass upload via the Dataverse API.
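The CSV-to-dict step can be pictured with a small hand-rolled stand-in for pyDataverse's `read_csv_to_dict()`; the column names and conversion rules (strip the `dv.` prefix, parse JSON cells, map TRUE/FALSE strings to booleans) follow the changelog's description, but the exact behavior of the real function may differ:

```python
import csv
import io
import json

def read_csv_template(csv_text):
    """Convert CSV template rows into dicts: drop the 'dv.' column prefix,
    parse JSON-looking cells and map TRUE/FALSE strings to booleans."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        clean = {}
        for key, value in row.items():
            key = key[3:] if key.startswith("dv.") else key
            if value in ("TRUE", "FALSE"):
                clean[key] = value == "TRUE"
            elif value.startswith(("[", "{")):
                clean[key] = json.loads(value)
            else:
                clean[key] = value
        rows.append(clean)
    return rows

# One row of a hypothetical dataset template.
template = 'dv.title,dv.subject,dv.restricted\n"My Dataset","[""Other""]",FALSE\n'
records = read_csv_template(template)
```

Each resulting dict can then be fed into a Dataset model for upload.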
- `long_description_content_type` (#4)

Summary: Add other APIs next to the Native API and update the Native API.
- `get_datafile()`, `get_datafiles()`, `get_datafile_bundle()`
- `request_access()`, `allow_access_request()`, `grant_file_access()`, `list_file_access_requests()`
- `total()`, `past_days()`, `get_dataverses_by_subject()`, `get_dataverses_by_category()`, `get_datasets_by_subject()`, `get_datasets_by_data_location()`
- `get_service_document()`
- `search()`
- `get_children()`
- `dataverse_id2alias()`
- `get_dataverse_contents()`
- `get_dataverse_assignments()`
- `get_dataverse_facets()`
- `edit_dataset_metadata()` (#19)
- `destroy_dataset()`
- `create_dataset_private_url()`, `get_dataset_private_url()`, `delete_dataset_private_url()`
- `get_dataset_versions()`, `get_dataset_version()`
- `get_dataset_assignments()`
- `get_dataset_lock()`
- `get_datafiles_metadata()`
- `update_datafile_metadata()`
- `redetect_file_type()`
- `restrict_datafile()`
- `reingest_datafile()`, `uningest_datafile()`
- `upload_datafile()`
- `replace_datafile()`
- `get_dataverse_roles()`, `create_role()`, `show_role()`, `delete_role()`
- `get_user_api_token_expiration_date()`, `recreate_user_api_token()`, `delete_user_api_token()`
- `get_user()` (#59)
- `get_info_api_terms_of_use()`
- `create_dataset()` (#3)
- `upload_datafile()`
- `pydataverse`: `auth` parameter no longer used

Summary: Re-factoring of all models (Dataverse, Dataset, Datafile).
New methods:
- `from_json()`: imports JSON (like Dataverse's own JSON format) into a pyDataverse model object
- `get()`: returns a dict of the pyDataverse model object
- `json()`: returns a JSON string (like Dataverse's own JSON format) of the pyDataverse model object; mostly used for API uploads
- `validate_data()`: validates a pyDataverse object with a JSON schema

New utility functions:
- `write_dicts_as_csv()` (#11)
- walk the output of `get_children()` and extract Dataverses, Datasets and Datafiles (`dataverse_tree_walker()`)
- save the output of `dataverse_tree_walker()` in separate JSON files (`save_tree_data()`)
- `validate_data()`
- `clean_string()`
- `create_dataverse_url()`, `create_dataset_url()`, `create_datafile_url()`
- `read_csv_to_dict()`: replace the `dv.` prefix, load JSON cells and convert boolean cell strings

Many new pages and tutorials were added to the documentation.
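The `from_json()`/`get()` round-trip can be illustrated with a small stand-alone flattener (not pyDataverse code; it only mirrors how compound fields in the Dataverse upload format collapse into flat dicts):

```python
def flatten_citation(dataset_json):
    """Parse Dataverse upload JSON and return one flat entry per citation
    field, similar in shape to what the models' get() returns."""
    flat = {}
    fields = dataset_json["datasetVersion"]["metadataBlocks"]["citation"]["fields"]
    for field in fields:
        if field["typeClass"] == "compound":
            # Compound fields hold a list of sub-field dicts; keep only values.
            flat[field["typeName"]] = [
                {k: sub["value"] for k, sub in item.items()}
                for item in field["value"]
            ]
        else:
            flat[field["typeName"]] = field["value"]
    return flat

# Minimal upload-format input with one primitive and one compound field.
upload = {"datasetVersion": {"metadataBlocks": {"citation": {"fields": [
    {"typeName": "title", "typeClass": "primitive", "multiple": False,
     "value": "Youth in Austria 2005"},
    {"typeName": "author", "typeClass": "compound", "multiple": True,
     "value": [{"authorName": {"typeName": "authorName",
                "typeClass": "primitive", "multiple": False,
                "value": "LastAuthor1, FirstAuthor1"}}]},
]}}}}
flat = flatten_citation(upload)
```

`json()` is the inverse direction: the flat model data is expanded back into this nested upload format for API uploads.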
Thanks to Daniel Melichar (@dmelichar), Vyacheslav Tykhonov (Slava), GDCC, @ecowan, @BPeuch, @j-n-c and @ambhudia for their support for this release. Special thanks to the Pandas project for their great blueprint for the Contributor Guide.
PyDataverse is supported by funding as part of the Horizon2020 project SSHOC.
This release fixes a bug in the Dataset.dict() generation.
`series`, `socialScienceNotes` and `targetSampleSize` caused an error in `Dataset.dict()`, because the contained sub-values were stored directly in their own class attributes.

To find out how you can contribute, please have a look at the Contributor Guide. No contribution is too small!
The most important contribution you can make right now is to use the module. It would be great if you install it, run some code on your PC, access your own Dataverse instance if possible, and give feedback afterwards (contact).
pyDataverse includes a collection of functionalities to import, export and manipulate data and its metadata via the Dataverse API.
-- Greetz, Stefan Kasberger
This release adds functionalities to import, manipulate and export the metadata of Dataverses, Datasets and Datafiles.
Version 0.2.0 is named in honor of Ida Pfeiffer (Wikipedia), an Austrian traveler and travel-book author. She went on several journeys around the world, where she collected plants, insects, mollusks, marine life and mineral specimens, and brought most of them back to the Natural History Museum of Vienna.
- `dict()`
- `Api()` (PR #8)
- `dict()` for automatic import of datasets into a `Dataset()` object
- `tox.ini`
- `requests>=2.12.0` or newer needed

From 18th to 22nd of June 2019, pyDataverse's main developer Stefan Kasberger will be at the Dataverse Community Conference in Cambridge, MA to exchange with others about pyDataverse and develop it further. If you are interested and around, drop by and join us. If you cannot attend, you can connect with us via the Dataverse Chat.
Another way is to share this release with others who could be interested (e.g. retweet my tweet, or send an email).
https://twitter.com/stefankasberger/status/1140832352517668864
Thanks to Ajax23 for PR #8. It's a great contribution, and it's always amazing to see the idea of open source in action. :)
-- Greetz, Stefan Kasberger
This release is a quick bugfix. It adds requests to the install_requires and updates the packaging and testing configuration.
- add `requests` to the `install_requires` in `setup.py`
- `setup.py`
- `tools/tests-requirements.txt`
- `tox.ini`: add Python versions, add dist test, add pypitest test, clean up and re-structure the configuration
The most important contribution right now is simply to use the module. It would be great if you install it, run some code on your PC, access your own Dataverse instance if possible, and give feedback afterwards (contact).
pyDataverse includes the most basic data operations to import and export data via the Dataverse API. The functionality will be expanded in the coming weeks with more requests and a class-based data model for the metadata. This will make it easy to import and export metadata, and upload it directly to the API.
Thanks to @moumenuisawe for mentioning this bug.
-- Greetz, Stefan Kasberger
This release is the initial release of pyDataverse. It offers basic features to access the Dataverse API via Python: create, retrieve, publish and delete Dataverses, Datasets and Datafiles.
Version 0.1.0 is named in honor of Marietta Blau (Wikipedia), an Austrian researcher in the field of particle physics. In 1950, she was nominated for the Nobel prize for her contributions.
- `api.py`
- `utils.py`: File IO and data conversion functionalities to support API operations
- `exceptions.py`: Custom exceptions
- `tests/*.py`: Tests with test data in pytest, tested with tox on Travis CI
Thanks to dataverse-client-python for being the main orientation and input for the start of pyDataverse. Also thanks to @kaczmirek, @pdurbin, @djbrooke and @4tikhonov for their support on this.
-- Greetz, Stefan Kasberger
GDCC uses GitHub to coordinate community contributions to Dataverse and to manage development of software and documentation that extend or interact with Dataverse.