Step 1 — Set Up Google Cloud & Vertex AI (Streamlit Chatbot Course)

Create a project, enable Vertex AI, set credentials, install Cloud SDK, and verify from PyCharm.

What we’ll accomplish
  • Create/choose a GCP project and enable Vertex AI API.
  • Create a Service Account (for deployments) and set up Application Default Credentials (for local dev).
  • Install Google Cloud SDK, authenticate, and verify from a short Python test in PyCharm.

1) Create a Google Cloud Project

  1. Go to Google Cloud Console and sign in with your Google account.
  2. Top navbar → Project dropdown → New Project. Name it (e.g., vertex-chat-streamlit).
  3. Make sure Billing is enabled for the project (free credits often available for new accounts).
[Screenshot placeholder: GCP Console → New Project dialog]
Region tip: We’ll use us-central1 in this course. Keep your services in the same region for fewer surprises.
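If you prefer the CLI, the same step can be done with gcloud. The project ID below is a placeholder — project IDs must be globally unique, so pick your own — and the billing-account ID comes from the `accounts list` output:

```shell
# CLI alternative: create the project and make it the active one
gcloud projects create vertex-chat-streamlit-12345 --name="vertex-chat-streamlit"
gcloud config set project vertex-chat-streamlit-12345

# Find your billing account ID, then link it to the project
gcloud billing accounts list
gcloud billing projects link vertex-chat-streamlit-12345 \
  --billing-account=XXXXXX-XXXXXX-XXXXXX
```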

2) Enable Required APIs

  1. Left menu → APIs & Services ▸ Library.
  2. Enable Vertex AI API.
  3. Also enable: Cloud Storage API (datasets, artifacts), IAM API (permissions).
[Screenshot placeholder: APIs & Services → Library → “Vertex AI API” → Enable]
Done? Your project can now call Vertex AI.
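The same three APIs can also be enabled from the CLI — the service names below are the standard identifiers for Vertex AI, Cloud Storage, and IAM:

```shell
# Enable Vertex AI, Cloud Storage, and IAM APIs
# (equivalent to clicking "Enable" for each in the Console Library)
gcloud services enable \
  aiplatform.googleapis.com \
  storage.googleapis.com \
  iam.googleapis.com

# Confirm they now show as enabled
gcloud services list --enabled
```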

3) Create a Service Account (for deployments)

  1. Left menu → IAM & Admin ▸ Service Accounts ▸ Create Service Account.
  2. Name: vertex-chatbot-sa.
  3. Roles: Vertex AI User. (Later for Cloud Run you’ll also need Service Account User on the runner identity.)
  4. Create → open the service account → Keys tab → Add Key ▸ Create new key → JSON → Download.
  5. Save as:
    Windows: C:\VertexChatbot\keys\sa.json
    macOS/Linux: /Users/you/VertexChatbot/keys/sa.json
Keep it secret. Don’t commit this JSON to GitHub; use it locally only, and rotate the key immediately if it is ever leaked.
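The Console steps above map to three gcloud commands. Substitute your own project ID; the email address is the one GCP generates automatically for a service account named vertex-chatbot-sa:

```shell
# 1. Create the service account
gcloud iam service-accounts create vertex-chatbot-sa \
  --display-name="Vertex Chatbot SA"

# 2. Grant it the Vertex AI User role on the project
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:vertex-chatbot-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

# 3. Download a JSON key (keep it out of version control)
gcloud iam service-accounts keys create sa.json \
  --iam-account="vertex-chatbot-sa@YOUR_PROJECT_ID.iam.gserviceaccount.com"
```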

4) Install Google Cloud SDK (CLI)

  1. Download & install from cloud.google.com/sdk.
  2. Open Terminal / CMD / PowerShell and run:
    gcloud init
    gcloud config set project YOUR_PROJECT_ID
    gcloud config set compute/region us-central1
    gcloud auth application-default login

gcloud auth application-default login sets up Application Default Credentials (ADC) on your machine—perfect for PyCharm & Streamlit during development.

You’re authenticated locally without hardcoding keys in your app.
You can also point to your service-account key (not recommended for dev):
set GOOGLE_APPLICATION_CREDENTIALS=C:\VertexChatbot\keys\sa.json (Windows)
export GOOGLE_APPLICATION_CREDENTIALS=/Users/you/VertexChatbot/keys/sa.json (macOS/Linux)
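If you’re ever unsure which of the two mechanisms your code will pick up, this small stdlib-only helper reports it. The function name `credential_source` is ours for illustration — it is not part of any Google library; the ADC file locations are the defaults written by `gcloud auth application-default login`:

```python
import os

# Default ADC file locations written by `gcloud auth application-default login`
ADC_PATHS = {
    "posix": os.path.expanduser(
        "~/.config/gcloud/application_default_credentials.json"),
    "nt": os.path.join(os.environ.get("APPDATA", ""), "gcloud",
                       "application_default_credentials.json"),
}


def credential_source() -> str:
    """Report which credentials the Google client libraries will use.

    Precedence mirrors the libraries' own lookup: an explicit key file
    via GOOGLE_APPLICATION_CREDENTIALS wins, then the local ADC file.
    """
    key_file = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if key_file:
        return f"service-account key: {key_file}"
    adc = ADC_PATHS.get(os.name, ADC_PATHS["posix"])
    if os.path.exists(adc):
        return f"application default credentials: {adc}"
    return "no local credentials found - run 'gcloud auth application-default login'"


if __name__ == "__main__":
    print(credential_source())
```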

5) Verify Your Setup

CLI checks

gcloud auth list
gcloud config list
gcloud services list --enabled | findstr vertex   # Windows
gcloud services list --enabled | grep vertex      # macOS/Linux

Python check (run in PyCharm)

Install the library in your virtual env:

pip install --upgrade google-cloud-aiplatform streamlit python-dotenv

Create test_vertex.py with:

# test_vertex.py
# Quick smoke test for Vertex AI (Gemini) from your laptop/PyCharm.

import vertexai
from vertexai.generative_models import GenerativeModel

PROJECT_ID = "YOUR_PROJECT_ID"
LOCATION = "us-central1"

vertexai.init(project=PROJECT_ID, location=LOCATION)
model = GenerativeModel("gemini-1.5-pro")  # prebuilt LLM, no training needed
resp = model.generate_content("Say hello in five words.")
print("✅ Call OK. Model replied:")
print(resp.text)
Expected output (varies):
✅ Call OK. Model replied:
Hello there, nice to meet!
If you get a permission error:
  • Ensure the Vertex AI API is enabled.
  • Re-run gcloud auth application-default login and select the correct account.
  • Confirm the active project: gcloud config list.

6) Optional: Store Project Settings in .env for PyCharm

Create a .env file at your project root (don’t commit to public repos):

GCP_PROJECT_ID=YOUR_PROJECT_ID
GCP_LOCATION=us-central1
# For key-based auth (optional; prefer ADC in dev)
# GOOGLE_APPLICATION_CREDENTIALS=C:/VertexChatbot/keys/sa.json

Install python-dotenv (already in the earlier pip line) and load in code:

from dotenv import load_dotenv
import os, vertexai
from vertexai.generative_models import GenerativeModel

load_dotenv()
vertexai.init(
    project=os.getenv("GCP_PROJECT_ID"),
    location=os.getenv("GCP_LOCATION", "us-central1"),
)
model = GenerativeModel("gemini-1.5-pro")
print(model.generate_content("Environment variables loaded successfully.").text)
Now your PyCharm runs pick up config automatically.
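Under the hood, load_dotenv is essentially doing this. The sketch below is simplified — `load_env_file` is our illustrative name, and the real python-dotenv also handles quoting, `export` prefixes, and variable interpolation:

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=VALUE lines into os.environ.

    Skips blank lines and comments; existing environment variables
    are not overwritten (same default behavior as python-dotenv).
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```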

What’s Next?

In Step 2 we’ll create the Streamlit app shell (UI), wire it to Vertex AI, and run locally. We’ll get a chat input box and a response area working end-to-end.

[Screenshot placeholder: Streamlit “Hello, Chatbot” page layout]
Streamlit + Vertex AI Chatbot Course — Step 1 of 10