This repo contains resources for participants of the Quantium GenAI Hackathon.
Peer into the future of business solutions on the cloud!
This is an opportunity to experiment with Generative AI on Google Cloud, and build new solutions to real business problems.
Below are the links to your team's project:
- AI Artisans
- AI Billionaires
- Artificial Appetite
- Automating My Job Away
- Bing Chillin
- Chat PNC
- Checkout Max Ultra Pro
- Classy Tendies
- CWC
- Data Dreamers
- eNGS
- FatGPT
- GENius and Inspiring
- Grad Engineers and Eric
- HackAIholics
- KISS AI
- LINNAEAN
- Maui
- New Chicago
- PII AI
- Q-Smart
- Rishy and the Dutty
- SASQUAD
- S.M.A.R.T
- Team Hamburger
- The Q RedWinery
- Walnuts
- wiq.SAGA
- wiq.Trends
Each team receives a dedicated Google Cloud project, named `<team-name-random-suffix>`, with the Project Owner role.
By default, the following services/APIs are enabled:
- "aiplatform.googleapis.com"
- "artifactregistry.googleapis.com"
- "bigquery.googleapis.com"
- "compute.googleapis.com"
- "cloudbuild.googleapis.com"
- "cloudfunctions.googleapis.com"
- "datacatalog.googleapis.com"
- "dataflow.googleapis.com"
- "datastudio.googleapis.com"
- "dlp.googleapis.com"
- "eventarc.googleapis.com"
- "logging.googleapis.com"
- "sourcerepo.googleapis.com"
- "run.googleapis.com"
- "pubsub.googleapis.com"
- "monitoring.googleapis.com"
- "notebooks.googleapis.com"
- "eventarcpublishing.googleapis.com"
- "storage.googleapis.com"
To run the following commands in your browser, open Cloud Shell through the Google Cloud Console.

- `gcloud init` initialises, authorises, and configures the gcloud CLI
- `gcloud auth login` authorises access to the gcloud CLI for your current account
- `gcloud config set project <PROJECT_ID>` sets the default project to work on
To run these commands locally, install the gcloud CLI.
You can manage your Vertex AI entities through the gcloud CLI. For example:

`gcloud ai operations describe 1234 --project=example --region=us-central1` describes the operation `1234` in project `example`, region `us-central1`.

`gcloud ai models upload --container-image-uri="gcr.io/example/my-image" --display-name=my-model --project=example --region=us-central1` uploads a model with display name `my-model` (container image `gcr.io/example/my-image`) to project `example`, region `us-central1`.
For a full list of Vertex AI commands, see the `gcloud ai` reference documentation.
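Commands like `gcloud ai operations describe` are often wrapped in a polling loop that waits for a long-running operation to finish. A minimal Python sketch of that pattern, using a stand-in `get_operation` function in place of a real API call (the function name and the `done` field convention here are illustrative assumptions, not a Google API):

```python
import time

def wait_for_operation(get_operation, op_id, poll_interval=2.0, timeout=600.0):
    """Poll a long-running operation until it reports done.

    `get_operation` stands in for a real API call and must return a
    dict with a "done" key; this is an illustrative sketch only.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = get_operation(op_id)
        if op.get("done"):
            return op
        time.sleep(poll_interval)
    raise TimeoutError(f"operation {op_id} did not finish within {timeout}s")

# Usage with a fake backend that finishes on the third poll:
calls = {"n": 0}

def fake_get_operation(op_id):
    calls["n"] += 1
    return {"name": op_id, "done": calls["n"] >= 3}

result = wait_for_operation(fake_get_operation, "1234", poll_interval=0.01)
print(result["done"])  # True
```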
Each project has quotas to restrict your consumption of shared resources, including hardware, software and network components.
These quotas are measured per region: your project could use 60 requests per minute in one region and another, independent, 60 requests per minute in a second supported region.
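Because each region's quota is tracked independently, a client that spreads traffic across regions effectively multiplies its budget. A minimal Python sketch of per-region request accounting with a 60-second sliding window (the class is purely illustrative, not a Google API; 60 requests per minute matches the chat-bison quota):

```python
import time
from collections import defaultdict, deque

class PerRegionLimiter:
    """Track request timestamps separately for each region.

    Each region gets its own independent sliding window, mirroring how
    per-region quotas are enforced. Illustrative sketch only.
    """

    def __init__(self, limit_per_minute=60):
        self.limit = limit_per_minute
        self.windows = defaultdict(deque)  # region -> request timestamps

    def allow(self, region, now=None):
        now = time.monotonic() if now is None else now
        window = self.windows[region]
        # Drop timestamps that have aged out of the 60-second window.
        while window and now - window[0] >= 60:
            window.popleft()
        if len(window) < self.limit:
            window.append(now)
            return True
        return False

limiter = PerRegionLimiter(limit_per_minute=60)
# 60 requests exhaust one region's budget...
for _ in range(60):
    limiter.allow("us-central1", now=0.0)
print(limiter.allow("us-central1", now=0.0))   # False: quota hit
# ...but a second region still has its own full budget.
print(limiter.allow("europe-west4", now=0.0))  # True
```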
| Request Quota | Value |
|---|---|
| `base_model:chat-bison` requests per minute | 60 |
| `base_model:code-bison` (includes `codechat-bison`) requests per minute | 15 |
| `base_model:code-gecko` requests per minute | 15 |
| `base_model:text-bison` requests per minute | 60 |
| `base_model:textembedding-gecko` requests per minute | 600 |
| Resource management requests* per minute | 600 |
| Job or long-running operation requests per minute | 60 |
| Online prediction requests per minute+ | 30,000 |
| Online prediction request throughput per minute | 1.5 GB |
| Online explanation requests per minute | 600 |
| Vertex AI Vizier requests per minute | 6,000 |
| Vertex AI Feature Store online serving requests per minute | 300,000 |
| Vertex ML Metadata requests per minute | 12,000 |
* Resource management requests include any request that is not a job, long-running operation, online prediction request, or Vertex AI Vizier request.
+ This quota applies to public endpoints only. Private endpoints have no per-minute request limit.
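When a request exceeds one of these per-minute quotas, the API rejects it with a rate-limit error, and the usual remedy is to retry with exponential backoff. A minimal Python sketch of that pattern, using a generic `QuotaExceeded` exception as a stand-in for the real error type:

```python
import random
import time

class QuotaExceeded(Exception):
    """Stand-in for the API's rate-limit error (e.g. HTTP 429)."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=32.0):
    """Retry fn on QuotaExceeded, doubling the delay after each failure."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except QuotaExceeded:
            if attempt == max_retries:
                raise
            delay = min(base_delay * (2 ** attempt), max_delay)
            # Add a little random jitter so concurrent clients desynchronise.
            time.sleep(delay + random.uniform(0, base_delay))

# Usage with a fake endpoint that succeeds on its third attempt:
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise QuotaExceeded("requests per minute exceeded")
    return "ok"

result = call_with_backoff(flaky_call, base_delay=0.01)
print(result)  # ok
```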
Additional information and resources are available at the links below: