Some models defined with Cog to show you how it works

replicate, updated 2022-05-17 01:56:48

Cog example models

This repo contains example machine learning models you can use to try out Cog.

Once you've got a working model and want to publish it so others can see it in action, check out replicate.com/docs.
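To give a sense of what the examples contain: a Cog model is a `cog.yaml` describing the environment plus a predictor class. Here is a minimal sketch, not one of the actual examples in this repo; the weights file and loader are placeholders, while the `cog` package API (`BasePredictor`, `Input`, `Path`) is the real one:

```yaml
# cog.yaml
build:
  python_version: "3.8"
  python_packages:
    - "torch==1.13.1"
predict: "predict.py:Predictor"
```

```python
# predict.py -- sketch; "weights.pth" and load_weights() are hypothetical
from cog import BasePredictor, Input, Path

class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts: load model weights here.
        self.model = load_weights("weights.pth")  # hypothetical helper

    def predict(self, image: Path = Input(description="Input image")) -> Path:
        # Runs per request; inputs and outputs are declared with type hints.
        return self.model(image)
```

With that in place, `cog predict -i image=@input.jpg` builds the image and runs a prediction locally in Docker.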

Examples in this repo

Real world examples

The models in this repo are small and contrived. Here are a few real-world examples:

  • https://github.com/andreasjansson/pretrained-gan-70s-scifi
  • https://github.com/minzwon/sota-music-tagging-models
  • https://github.com/orpatashnik/StyleCLIP
  • https://github.com/andreasjansson/InstColorization (PR)
  • https://github.com/andreasjansson/SRResCGAN/tree/cog-config

Support

Having trouble getting a model working? Let us know and we'll help. If you've encountered a problem with Cog itself, file a GitHub issue. Otherwise, chat with us on Discord or send us an email at [email protected].

Issues

Got error for gpu: True setting

opened on 2023-03-12 01:45:14 by hiwaveSupport

cog.yaml:

```yaml
build:
  gpu: true
  python_version: "3.8"
  python_packages:
    - "numpy"
    - "pandas"
    - "pillow==9.2.0"
    - "torch==1.13.1"
    - "transformers"
    - "pysbd"
predict: "predict.py:Predictor"
```

Getting the error from Docker below:

```
Building Docker image from environment in cog.yaml...
[+] Building 394.3s (18/18) FINISHED
 => [internal] load build definition from Dockerfile 0.1s
 => => transferring dockerfile: 2.01kB 0.0s
 => [internal] load .dockerignore 0.1s
 => => transferring context: 2B 0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2 0.3s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc 0.0s
 => [internal] load metadata for docker.io/nvidia/cuda:11.2.0-cudnn8-devel-ubuntu20.04 1.6s
 => [internal] load build context 0.1s
 => => transferring context: 40.50kB 0.0s
 => [stage-0 1/10] FROM docker.io/nvidia/cuda:11.2.0-cudnn8-devel-ubuntu20.04@sha256:764fb6d1fc3df959612037cd20908aaff62436ec51a0a2bf445df6bb94cd24e1 217.1s
 => => resolve docker.io/nvidia/cuda:11.2.0-cudnn8-devel-ubuntu20.04@sha256:764fb6d1fc3df959612037cd20908aaff62436ec51a0a2bf445df6bb94cd24e1 0.0s
 => => sha256:780f4c3f099464364ab2449fad3f5b68d46c5d11da2c6f48ec51e4b59aa5cde7 2.43kB / 2.43kB 0.0s
 => => sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83 28.58MB / 28.58MB 2.3s
 => => sha256:764fb6d1fc3df959612037cd20908aaff62436ec51a0a2bf445df6bb94cd24e1 743B / 743B 0.0s
 => => sha256:024b0a00097da87a217d9587198ed0028dbd89b999fdb754bc8af87420f2a925 15.22kB / 15.22kB 0.0s
 => => sha256:c261e48f49ff9aa520143aa2874a7285c27708f6614627b2d2b854401fcc3e58 7.93MB / 7.93MB 19.7s
 => => sha256:f7ab41eda8be2641b5de237727cbea10b05c74620e80c011447aba1bcd3c095a 11.04MB / 11.04MB 2.3s
 => => extracting sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83 0.4s
 => => sha256:b83c93effa812d146d35aebe893aa5e96297e339bff8bf21f307f672f5b1810d 6.43kB / 6.43kB 2.5s
 => => sha256:bb3b051d276e080821222bf451ce90d092070dc93d5869a137f00320937949c1 184B / 184B 2.7s
 => => sha256:8e60e5d0729267f93cdd1f945fad3cd20fd95b1bc645aa4bd4be4d401afc8bf1 1.04GB / 1.04GB 164.9s
 => => sha256:64f7e567a05ce3c8027ecabdfdb1510bfa54cbc3c08889fb34443ce3955e61b6 61.46kB / 61.46kB 3.2s
 => => sha256:bd1330614aec39f7a56a6f5f3777c7110a81284f512da78a0774e7c7f7fdfe3a 1.15GB / 1.15GB 173.2s
 => => extracting sha256:c261e48f49ff9aa520143aa2874a7285c27708f6614627b2d2b854401fcc3e58 0.1s
 => => sha256:f0cc2865d06f056e501f1574935d9c1e60027a3a874cdcac7ec529d404acdd05 84.21kB / 84.21kB 20.2s
 => => extracting sha256:f7ab41eda8be2641b5de237727cbea10b05c74620e80c011447aba1bcd3c095a 0.2s
 => => extracting sha256:bb3b051d276e080821222bf451ce90d092070dc93d5869a137f00320937949c1 0.0s
 => => sha256:6dfa9a68bf6ac64163457cc36a88bc071e086ec16dd8c3c15cc6da4ba04cbb0f 1.31GB / 1.31GB 202.6s
 => => extracting sha256:b83c93effa812d146d35aebe893aa5e96297e339bff8bf21f307f672f5b1810d 0.0s
 => => extracting sha256:8e60e5d0729267f93cdd1f945fad3cd20fd95b1bc645aa4bd4be4d401afc8bf1 8.5s
 => => extracting sha256:64f7e567a05ce3c8027ecabdfdb1510bfa54cbc3c08889fb34443ce3955e61b6 0.0s
 => => extracting sha256:bd1330614aec39f7a56a6f5f3777c7110a81284f512da78a0774e7c7f7fdfe3a 10.9s
 => => extracting sha256:f0cc2865d06f056e501f1574935d9c1e60027a3a874cdcac7ec529d404acdd05 0.0s
 => => extracting sha256:6dfa9a68bf6ac64163457cc36a88bc071e086ec16dd8c3c15cc6da4ba04cbb0f 13.4s
 => [stage-0 2/10] RUN rm -f /etc/apt/sources.list.d/cuda.list && rm -f /etc/apt/sources.list.d/nvidia-ml.list && apt-key del 7fa2af80 0.6s
 => [stage-0 3/10] RUN --mount=type=cache,target=/var/cache/apt set -eux; apt-get update -qq; apt-get install -qqy --no-install-recommends curl; rm -rf /var/lib/a 10.7s
 => [stage-0 4/10] RUN --mount=type=cache,target=/var/cache/apt apt-get update -qq && apt-get install -qqy --no-install-recommends make build-essential libssl- 41.1s
 => [stage-0 5/10] RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash && git clone https://github.com/mo 62.7s
 => [stage-0 6/10] COPY .cog/tmp/build1447658136/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl 0.1s
 => [stage-0 7/10] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-any.whl 8.3s
 => [stage-0 8/10] COPY .cog/tmp/build1447658136/requirements.txt /tmp/requirements.txt 0.1s
 => [stage-0 9/10] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt 39.4s
 => [stage-0 10/10] WORKDIR /src 0.1s
 => exporting to image 0.1s
 => => exporting layers 0.0s
 => => writing image sha256:e31584317928bf9bae132645d99209f486d15dba63106217e734f13dfc717f71 0.0s
 => => naming to docker.io/library/cog-cogtest-base 0.0s
 => exporting cache 0.0s
 => => preparing build cache for export 0.0s

Running 'bash' in Docker with the current directory mounted as a volume...
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ⓘ Docker is missing required device driver
```
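For context, the `could not select device driver "" with capabilities: [[gpu]]` message comes from the Docker daemon on the host, not from anything in `cog.yaml`: Docker cannot find an NVIDIA container runtime. A possible fix on an Ubuntu host with the NVIDIA driver installed, assuming NVIDIA's apt repository is already configured (these are the standard NVIDIA Container Toolkit steps, not Cog-specific):

```shell
# Reproduce the failure outside of Cog; this exact error appears when the
# toolkit is missing (image name taken from the build log in this issue).
docker run --rm --gpus all nvidia/cuda:11.2.0-cudnn8-devel-ubuntu20.04 nvidia-smi

# Install the toolkit and restart the Docker daemon.
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```

If `nvidia-smi` succeeds inside the container afterwards, `cog run`/`cog predict` with `gpu: true` should be able to select the device driver.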

Notebook example broken - Jupyter command `jupyter-notebook` not found.

opened on 2022-11-09 03:58:48 by oscarnevarezleal

I stumbled into this error after running the notebook example. I'm not sure exactly what is wrong.

```
Building Docker image from environment in cog.yaml...
$ docker buildx build --platform linux/amd64 --file - --build-arg BUILDKIT_INLINE_CACHE=1 --tag cog-notebook-base --progress auto .
[+] Building 1.5s (13/13) FINISHED
 => [internal] load build definition from Dockerfile 0.0s
 => => transferring dockerfile: 590B 0.0s
 => [internal] load .dockerignore 0.0s
 => => transferring context: 2B 0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2 0.4s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc 0.0s
 => [internal] load metadata for docker.io/library/python:3.9 0.4s
 => [internal] load build context 0.0s
 => => transferring context: 31.75kB 0.0s
 => [stage-0 1/5] FROM docker.io/library/python:3.9@sha256:475fe86ebf1da48ea27009a8f7d7e96231af4142de918a68010d48d0abb9c9c5 0.0s
 => CACHED [stage-0 2/5] COPY .cog/tmp/build3517166673/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl 0.0s
 => CACHED [stage-0 3/5] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-any.whl 0.0s
 => CACHED [stage-0 4/5] RUN --mount=type=cache,target=/root/.cache/pip pip install jupyterlab==3.2.4 0.0s
 => CACHED [stage-0 5/5] WORKDIR /src 0.0s
 => exporting to image 0.0s
 => => exporting layers 0.0s
 => => writing image sha256:45a9ad9948b152eb756fb39d04d2adc6a74946793ad3828d8a90f0d1fedb7207 0.0s
 => => naming to docker.io/library/cog-notebook-base 0.0s
 => exporting cache 0.0s
 => => preparing build cache for export 0.0s

Running 'jupyter notebook --allow-root --ip=0.0.0.0' in Docker with the current directory mounted as a volume...
$ docker run --rm --shm-size 8G --interactive --publish 8888:8888 --tty --mount type=bind,source=/Users/onevarez/WebstormProjects/cog-examples/notebook,destination=/src --workdir /src cog-notebook-base jupyter notebook --allow-root --ip=0.0.0.0
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
usage: jupyter [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir] [--paths] [--json] [--debug] [subcommand]

Jupyter: Interactive Computing

positional arguments:
  subcommand     the subcommand to launch

optional arguments:
  -h, --help     show this help message and exit
  --version      show the versions of core jupyter packages and exit
  --config-dir   show Jupyter config dir
  --data-dir     show Jupyter data dir
  --runtime-dir  show Jupyter runtime dir
  --paths        show all Jupyter paths. Add --json for machine-readable format.
  --json         output paths as machine-readable json
  --debug        output debug information about paths

Available subcommands: dejavu execute kernel kernelspec lab labextension labhub
migrate nbclassic nbconvert run server troubleshoot trust

Jupyter command `jupyter-notebook` not found.
ⓘ exit status 1
```
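One observation that may explain this: the image installs only `jupyterlab==3.2.4`, and JupyterLab 3.x does not provide the `jupyter-notebook` entry point (note that `notebook` is absent from the "Available subcommands" list in the output, while `lab` and `nbclassic` are present). A possible workaround, sketched against the example's `cog.yaml`, is to add the classic `notebook` package:

```yaml
# cog.yaml (sketch)
build:
  python_packages:
    - "jupyterlab==3.2.4"
    - "notebook"   # provides the `jupyter-notebook` command
```

Alternatively, leaving the packages unchanged and running `jupyter lab --allow-root --ip=0.0.0.0` instead of `jupyter notebook ...` should avoid the missing entry point entirely.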

Get error when running cog build for resnet example

opened on 2022-06-01 00:11:00 by xiaoyu-work

I made this change locally to bypass the paramspec error, but got another protobuf error:

```
Building Docker image from environment in cog.yaml as cog-resnet...
[+] Building 3.0s (14/14) FINISHED
 => [internal] load build definition from Dockerfile 0.0s
 => => transferring dockerfile: 622B 0.0s
 => [internal] load .dockerignore 0.0s
 => => transferring context: 2B 0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2 0.1s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313 0.0s
 => [internal] load metadata for docker.io/library/python:3.8 0.1s
 => [internal] load build context 0.0s
 => => transferring context: 29.32kB 0.0s
 => [stage-0 1/6] FROM docker.io/library/python:3.8@sha256:f8dd6cc493bb667f693293f69927ae7c5ebf430a88b9d384c0c3ee 0.0s
 => CACHED [stage-0 2/6] COPY .cog/tmp/build3752096533/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none 0.0s
 => CACHED [stage-0 3/6] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-a 0.0s
 => CACHED [stage-0 4/6] RUN --mount=type=cache,target=/root/.cache/pip pip install pillow==9.1.0 tensorflow==2 0.0s
 => CACHED [stage-0 5/6] WORKDIR /src 0.0s
 => [stage-0 6/6] COPY . /src 0.1s
 => exporting to image 0.0s
 => => exporting layers 0.0s
 => => writing image sha256:034e64239bd4618f4272e0b719fd4263366a34a9a37eacdaf77139887c179680 0.0s
 => => naming to docker.io/library/cog-resnet 0.0s
 => exporting cache 0.0s
 => => preparing build cache for export 0.0s
Adding labels to image...

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/site-packages/cog/command/openapi_schema.py", line 18, in <module>
    predictor = load_predictor()
  File "/usr/local/lib/python3.8/site-packages/cog/predictor.py", line 76, in load_predictor
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "predict.py", line 5, in <module>
    from tensorflow.keras.applications.resnet50 import (
  File "/usr/local/lib/python3.8/site-packages/tensorflow/__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 37, in <module>
    from tensorflow.python.eager import context
  File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/context.py", line 29, in <module>
    from tensorflow.core.framework import function_pb2
  File "/usr/local/lib/python3.8/site-packages/tensorflow/core/framework/function_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "/usr/local/lib/python3.8/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "/usr/local/lib/python3.8/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "/usr/local/lib/python3.8/site-packages/tensorflow/core/framework/resource_handle_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "/usr/local/lib/python3.8/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "/usr/local/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 560, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

ⓘ Failed to get type signature: exit status 1
```

I tried different versions of protobuf (3.17, 3.20, and 4.21), but none of them worked. I also tried setting PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python, but that didn't work either.
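Two details commonly trip up this particular workaround, sketched below with no claim about this repo's actual fix: the protobuf pin has to go into `cog.yaml`'s `python_packages` so it applies inside the image rather than on the host, and the environment variable only takes effect if it is set before the first protobuf-generated module is imported:

```python
import os

# Illustrative sketch, not the repo's fix. The pure-Python protobuf
# fallback is selected at import time, so the variable must be set
# before tensorflow (or any *_pb2 module) is imported; setting it
# after `import tensorflow` has no effect on already-loaded descriptors.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# import tensorflow as tf  # import only after the variable is set
```

Setting the variable in the shell (`docker run -e PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python ...`) achieves the same ordering guarantee without touching the code.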

Fix paramspec error

opened on 2022-05-17 01:56:47 by bfirsh

Parallel of https://github.com/replicate/cog/pull/593

Update jupyter README.md

opened on 2022-04-28 10:48:33 by sacdallago

Super tiny change that removes friction for token differences
