
Frequently Asked Questions

Quick answers to common questions about AI coding assistants

Find answers to the most common questions about using AI tools for programming, from getting started to advanced usage.


🚀 Getting Started

Which AI coding tool should I choose?​

It depends on your needs:

| Your Situation | Recommended Tool | Why |
|---|---|---|
| Just learning | DeepSeek (free) or ChatGPT | Free, easy to use, good for basics |
| Professional developer | GPT-4o or Cursor | Best balance of quality and speed |
| Large codebases | Kimi K2 or Claude | Can handle 1M+ tokens of context |
| Algorithms & math | DeepSeek or GPT-4o | Strong reasoning capabilities |
| Production code | Claude (quality) or GPT-4o (speed) | Highest code quality |
| Budget-conscious | DeepSeek | Completely free, surprisingly good |
| China-based | DeepSeek or Kimi | Better connectivity, Chinese support |
| IDE integration | Cursor or GitHub Copilot | Seamless coding experience |

Quick Comparison:

Best Free: DeepSeek
Best Overall: GPT-4o
Best Quality: Claude 3 Opus
Best Value: Claude 3 Sonnet
Best for Beginners: ChatGPT
Best for Large Projects: Kimi K2

See our detailed tool comparison.


How do I get started with AI coding?​

5-minute setup:

  1. Choose a tool (start with ChatGPT - free)
  2. Create account at chat.openai.com
  3. Try a simple prompt:
    Write a Python function to calculate fibonacci numbers
    with memoization. Include docstring and example usage.
  4. Review the code - don't just copy-paste
  5. Test it in your environment

Next steps:


Is AI coding cheating?​

No! It's a modern development tool, just like:

  • Stack Overflow (searching for solutions)
  • IDE autocomplete (code suggestions)
  • Documentation (looking up APIs)
  • Code snippets (reusing patterns)

You still need to:

  • ✅ Understand the problem
  • ✅ Review the solution
  • ✅ Test thoroughly
  • ✅ Debug issues
  • ✅ Maintain the code
  • ✅ Explain it to your team

Analogy: Using a calculator isn't "cheating" at math. You still need to know what calculation to do and verify the result makes sense.

Reality: Developers who use AI will replace those who don't.


Can I use AI for my job/school projects?​

Job: Usually YES, but:

  • ✅ Check your company policy
  • ✅ Don't share proprietary code with AI
  • ✅ Don't share customer data
  • ✅ You're responsible for code quality
  • ✅ Follow security best practices

School: It depends:

  • ❌ Likely not allowed for homework/exams (ask instructor)
  • ✅ Often okay for personal projects
  • ✅ Can use for learning concepts
  • ⚠️ Academic honesty policies vary

Pro tip: Be transparent. Ask your employer/instructor about their AI policy.


Will AI replace developers?​

Not anytime soon. Here's why:

AI is good at:

  • ✅ Generating boilerplate code
  • ✅ Explaining concepts
  • ✅ Debugging simple issues
  • ✅ Writing tests
  • ✅ Code transformations
  • ✅ Documentation

AI struggles with:

  • ❌ Understanding business requirements
  • ❌ System architecture decisions
  • ❌ Complex debugging
  • ❌ Stakeholder communication
  • ❌ Project management
  • ❌ Creative problem-solving
  • ❌ Code maintenance at scale

The future: AI is a power tool that amplifies developer capabilities. Developers who use AI effectively will be more valuable, not replaced.

Think of it like: Excel didn't replace accountants, it made them more productive.


💰 Costs & Pricing

How much does AI coding cost per month?​

Free Options:

  • DeepSeek: Completely free, unlimited
  • ChatGPT Free: Limited messages per day
  • Cursor Free: 50 requests per month
  • GitHub Copilot: Free for students/educators

Paid Subscriptions:

| Service | Cost | Best For |
|---|---|---|
| ChatGPT Plus | $20/month | General coding |
| Claude Pro | $20/month | High-quality code |
| Cursor Pro | $20/month | IDE integration |
| GitHub Copilot | $10/month | IDE autocomplete |
| Kimi K2 | ¥168/month (~$24) | Large context |

API (Pay-as-you-go):

  • Light usage: $5-20/month
  • Typical developer: $20-50/month
  • Heavy usage: $50-200/month

ROI: Most developers report a 2-5x productivity boost. Even at $50/month, saving just 10 hours a month is worth $500-1,000+ at typical hourly rates.


How can I reduce AI API costs?​

8 proven strategies:

1. Use Appropriate Models​

Simple tasks → GPT-3.5 Turbo ($0.50/M tokens)
Complex logic → GPT-4o ($5/M tokens)
Code review → Claude Sonnet ($3/M tokens)
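
A minimal sketch of how this routing can be encoded in code. The task labels and model ids below are illustrative assumptions, not fixed recommendations; check current model names and prices before relying on them:

# Hypothetical helper: cheap model for routine work, stronger model for hard work
MODEL_BY_TASK = {
    "simple": "gpt-3.5-turbo",      # boilerplate, quick transformations
    "complex": "gpt-4o",            # multi-step logic, tricky algorithms
    "review": "claude-3-5-sonnet",  # example id; use your provider's current model name
}

def pick_model(task_type: str) -> str:
    """Return a model id for the task type, defaulting to the cheapest option."""
    return MODEL_BY_TASK.get(task_type, "gpt-3.5-turbo")

Routing routine requests to a cheaper model is usually the single biggest cost lever.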

2. Enable Prompt Caching (Claude)​

  • Saves 90% on repeated context
  • Perfect for large codebases
  • See our caching guide (Coming in Phase 2)
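
A rough sketch of what this looks like with Anthropic's Python SDK. The model id and the LARGE_CODEBASE_CONTEXT placeholder are assumptions, and cache parameters can change, so confirm against the current Anthropic docs:

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LARGE_CODEBASE_CONTEXT,          # your big, repeated context string
            "cache_control": {"type": "ephemeral"},  # mark this block as cacheable
        }
    ],
    messages=[{"role": "user", "content": "Review the auth module for bugs."}],
)

Later calls that reuse the same cached prefix are billed at a much lower rate for that portion.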

3. Batch Requests​

# ❌ Expensive: 10 API calls
for item in items:
    process(item)        # 10 × $0.01 = $0.10

# ✅ Cheaper: 1 API call
process_batch(items)     # 1 × $0.02 = $0.02

4. Set Token Limits​

response = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=500,  # Limit response length
    messages=[...]
)

5. Use Streaming​

  • Stop if response goes off track
  • Saves tokens on bad responses
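
A small sketch with the OpenAI Python SDK; the early-stop check is a stand-in for whatever "off track" heuristic fits your use case:

from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Refactor this function to be iterative: ..."}],
    stream=True,
)

collected = []
for chunk in stream:
    piece = chunk.choices[0].delta.content or ""
    collected.append(piece)
    print(piece, end="", flush=True)
    # Hypothetical early-stop heuristic: stop consuming the stream if the
    # answer drifts, so you don't pay for the rest of a bad response.
    if "cannot help with" in "".join(collected).lower():
        break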

6. Track Usage​

from ai_usage_tracker import tracker

cost = tracker.track_request(model, input_tokens, output_tokens)
if tracker.today_total > 10.00:
    print("⚠️ Daily budget exceeded!")
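
Note that ai_usage_tracker above is assumed to be your own small helper module, not a published package; a minimal sketch of what it might contain:

# ai_usage_tracker.py -- minimal sketch; prices are examples, verify before use
PRICES_PER_MILLION = {
    "gpt-4o": (5.00, 15.00),        # (input $, output $) per million tokens
    "gpt-3.5-turbo": (0.50, 1.50),
}

class UsageTracker:
    def __init__(self):
        self.today_total = 0.0

    def track_request(self, model, input_tokens, output_tokens):
        price_in, price_out = PRICES_PER_MILLION.get(model, (5.00, 15.00))
        cost = (input_tokens * price_in + output_tokens * price_out) / 1_000_000
        self.today_total += cost
        return cost

tracker = UsageTracker()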

7. Start with Free Tier​

  • DeepSeek is surprisingly good
  • Test prompts on free models first
  • Only use paid models when needed

8. Cache Responses Locally​

  • Save common Q&A pairs
  • Reuse previous responses
  • Build a personal knowledge base

What's the difference between free and paid AI tools?​

| Feature | Free (e.g., DeepSeek) | Paid ($20/month) | API (Usage-based) |
|---|---|---|---|
| Access | Limited or unlimited | Unlimited | Pay per use |
| Speed | Slower | Faster | Fast |
| Priority | Low | High | Highest |
| Model | Older/smaller | Latest | Choose any |
| Usage Limits | Daily cap | No cap | Your budget |
| Features | Basic | Advanced (plugins, etc.) | Full control |
| Best For | Learning | Professional use | Automation |

Recommendation:

  • Learning: Start with free (DeepSeek, ChatGPT free)
  • Professional: Get subscription ($20/month)
  • Automation/Scale: Use API with cost tracking

🛠️ Technical Setup

How do I set up API keys?​

Step-by-step guide:

OpenAI (ChatGPT/GPT-4)​

1. Go to https://platform.openai.com/api-keys
2. Click "Create new secret key"
3. Copy key (starts with sk-...)
4. Set environment variable:

# Linux/Mac
echo 'export OPENAI_API_KEY="sk-your-key-here"' >> ~/.bashrc
source ~/.bashrc

# Windows PowerShell
setx OPENAI_API_KEY "sk-your-key-here"

# Or in code:
import os
os.environ['OPENAI_API_KEY'] = 'sk-your-key-here'

# Better: Use .env file
# .env
OPENAI_API_KEY=sk-your-key-here

# Python
from dotenv import load_dotenv
load_dotenv()
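
Once the key is in the environment (via export or load_dotenv()), the official SDK picks it up without you passing it explicitly; a quick sanity check:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment automatically
print(client.models.list().data[0].id)  # fails fast if the key is missing or invalid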

Anthropic (Claude)​

1. Go to https://console.anthropic.com/settings/keys
2. Create key
3. Set environment variable:

export ANTHROPIC_API_KEY="sk-ant-your-key-here"

Security Best Practices​

# ✅ DO
export API_KEY="xxx"          # set the key as an environment variable (shell)
echo ".env" >> .gitignore     # keep your .env file out of version control

# ❌ DON'T
API_KEY = "sk-xxx"            # hardcoded in source code (it ends up in git!)

How do I handle API rate limits?​

Understanding Rate Limits:

OpenAI (Tier 1):
- 500 requests/minute
- 10,000 tokens/minute

Anthropic:
- Based on your plan
- 50-1000 requests/minute

Handling Strategies:

1. Exponential Backoff​

import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def call_with_retry(prompt, max_retries=5):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}]
            )
        except RateLimitError:
            if attempt == max_retries - 1:
                raise

            wait_time = 2 ** attempt  # 1s, 2s, 4s, 8s
            print(f"Rate limit hit. Waiting {wait_time}s...")
            time.sleep(wait_time)

2. Rate Limiter​

from time import time, sleep

class RateLimiter:
    def __init__(self, requests_per_minute=50):
        self.rpm = requests_per_minute
        self.timestamps = []

    def wait_if_needed(self):
        now = time()

        # Remove old timestamps (>1 minute ago)
        self.timestamps = [t for t in self.timestamps if now - t < 60]

        # If at limit, wait
        if len(self.timestamps) >= self.rpm:
            sleep_time = 60 - (now - self.timestamps[0])
            print(f"Rate limit approaching. Waiting {sleep_time:.1f}s...")
            sleep(sleep_time)

        self.timestamps.append(now)

# Usage
limiter = RateLimiter(requests_per_minute=50)

for task in tasks:
    limiter.wait_if_needed()
    result = api_call(task)

3. Batch Processing​

import time

# Process in smaller batches
def process_in_batches(items, batch_size=10, delay=1):
    results = []

    for i in range(0, len(items), batch_size):
        batch = items[i:i+batch_size]

        for item in batch:
            results.append(process(item))

        # Wait between batches
        if i + batch_size < len(items):
            time.sleep(delay)

    return results

What if the AI gives wrong code?​

It happens! Here's what to do:

1. Debugging Workflow​

Step 1: Copy error back to AI
"I got this error when running your code:
[paste error message]

From this code:
[paste code]

Please fix it and explain what was wrong."

Step 2: Test the fix
- Don't just replace code
- Understand what changed
- Verify the fix makes sense

Step 3: Learn the pattern
"What category of error is this?
How can I prevent it in the future?
What should I watch for?"

2. Improve Your Prompt​

# ❌ Vague
"Write a function to process data"

# ✅ Specific
"Write a Python function to process CSV data:
- Handle missing values (fill with 0)
- Remove duplicate rows
- Convert date strings to datetime objects
- Handle encoding errors (use utf-8-sig)
- Include error handling for corrupt files
- Return pandas DataFrame
- Include docstring and type hints
- Include example usage"

3. Ask for Alternatives​

"The code you generated has a bug with [specific issue].
Show me 2 different approaches to solve this."

4. Verify with Tests​

import pytest

# Always write tests for AI-generated code
# (`process` here stands for the AI-generated function under test)
def test_ai_generated_function():
    # Happy path
    assert process([1, 2, 3]) == [2, 4, 6]

    # Edge cases
    assert process([]) == []
    assert process([0]) == [0]

    # Error cases
    with pytest.raises(TypeError):
        process("not a list")

5. Use Multiple AIs​

  • Ask same question to ChatGPT and Claude
  • Compare approaches
  • Cross-validate solutions

Can I use AI offline?​

Options:

1. Local Models (Free)​

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Run local models
ollama run codellama:13b
ollama run deepseek-coder:6.7b

# Use in code
from ollama import Client
client = Client()
response = client.generate(model='codellama', prompt='Write a function...')

Pros: Free, private, works offline.
Cons: Slower, lower quality than GPT-4/Claude.

2. Cache API Responses​

import json
import hashlib

class ResponseCache:
    def __init__(self, cache_file='ai_cache.json'):
        self.cache_file = cache_file
        self.cache = self.load()

    def load(self):
        try:
            with open(self.cache_file, 'r') as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def save(self):
        with open(self.cache_file, 'w') as f:
            json.dump(self.cache, f)

    def get(self, prompt):
        key = hashlib.md5(prompt.encode()).hexdigest()
        return self.cache.get(key)

    def set(self, prompt, response):
        key = hashlib.md5(prompt.encode()).hexdigest()
        self.cache[key] = response
        self.save()

# Usage
cache = ResponseCache()

def ask_ai(prompt):
    # Check cache first
    cached = cache.get(prompt)
    if cached:
        print("📦 Using cached response")
        return cached

    # Call API if not cached
    response = api_call(prompt)
    cache.set(prompt, response)
    return response

3. Download VSCode Extensions​

  • GitHub Copilot (online only)
  • TabNine (has offline mode)
  • Codeium (online only)

🔒 Security & Privacy

Is my code safe when I use AI?​

It depends on what you share:

What AI companies say:

  • ✅ OpenAI: Data not used for training (API)
  • ✅ Anthropic: Data not used for training
  • ⚠️ ChatGPT Free: Conversations may be used for training
  • ✅ Claude: Conversations not used for training

Best practices:

# ❌ DON'T share
- API keys, passwords, tokens
- Customer data (emails, names, etc.)
- Proprietary algorithms
- Trade secrets
- Internal URLs/IPs

# ✅ SAFE to share
- Public code patterns
- Generic algorithms
- Framework questions
- Anonymized examples

Sanitize before sharing:

# Before sharing with AI:
API_KEY = "sk-abc123..." # Change to: "YOUR_API_KEY"
user_email = "john@client.com" # Change to: "user@example.com"
DATABASE_URL = "postgres://..." # Change to: "YOUR_DATABASE_URL"
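
If you do this often, a small helper can automate the masking. This is a hypothetical sketch; the regex patterns cover only the obvious cases and should be extended for your own key and URL formats:

import re

REPLACEMENTS = [
    (re.compile(r"sk-[A-Za-z0-9_\-]{10,}"), "YOUR_API_KEY"),       # OpenAI-style keys
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "user@example.com"),  # email addresses
    (re.compile(r"postgres(?:ql)?://\S+"), "YOUR_DATABASE_URL"),   # connection strings
]

def sanitize(snippet: str) -> str:
    """Return a copy of the snippet with common secrets masked."""
    for pattern, placeholder in REPLACEMENTS:
        snippet = pattern.sub(placeholder, snippet)
    return snippet

print(sanitize('API_KEY = "sk-abc123def456ghi"'))  # API_KEY = "YOUR_API_KEY"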

Should I tell my team I'm using AI?​

Yes! Here's why:

Benefits:

  • ✅ Team can learn from your approach
  • ✅ Establish team guidelines
  • ✅ Share successful prompts
  • ✅ Avoid duplicating work
  • ✅ No stigma - it's a tool
  • ✅ Increase overall productivity

How to introduce it:

Team Meeting Talking Points:

"I've been using AI to speed up development:
- Generate boilerplate (30% faster)
- Debug issues (find bugs I miss)
- Write tests (better coverage)
- Learn new frameworks (4x faster)

I always review the code and test thoroughly.

Want to try? I can share my prompts and workflow.

Should we create team guidelines for AI usage?"

Create team guidelines:

# Team AI Usage Policy

## Allowed
- Code generation for new features
- Bug fixing assistance
- Test generation
- Documentation
- Learning new technologies

## Required
- Always review AI code
- Test thoroughly
- Follow security guidelines
- Don't share customer data
- Don't share credentials

## Recommended
- Share useful prompts
- Document learnings
- Track what works well

Can I use AI-generated code commercially?​

Yes, but with important caveats:

Legal:

  • ✅ You own the output (OpenAI and Anthropic policies)
  • ✅ Can use in commercial projects
  • ✅ Can modify and redistribute

But YOU are responsible for:

  • ⚠️ Code quality and correctness
  • ⚠️ Security vulnerabilities
  • ⚠️ License compliance
  • ⚠️ No copyright infringement
  • ⚠️ Testing and maintenance

Read the terms:

Best practice checklist:

Before using AI code in production:
- [ ] I've reviewed every line
- [ ] I understand how it works
- [ ] I've tested edge cases
- [ ] Security scanner shows no issues
- [ ] No hardcoded secrets
- [ ] Licenses are compatible
- [ ] I can maintain this code
- [ ] I can explain it in code review

🎯 Best Practices​

How do I write good prompts?​

5 principles of effective prompts:

1. Be Specific​

❌ "Write a login function"

✅ "Create a secure login function in Python using Flask:
- Accept email/password via POST
- Hash passwords with bcrypt
- Return JWT token (1 hour expiry)
- Rate limit: 5 attempts/minute
- Include error handling
- Include docstring
- Include unit tests"

2. Provide Context​

✅ "I'm building a React 18 app with TypeScript.
I need a custom hook for fetching data with:
- Loading state
- Error handling
- Automatic retries (3 attempts)
- Cancellation on unmount
- TypeScript types

My existing code style:
[paste example]"

3. Show Examples​

✅ "Generate tests in this format:

describe('add', () => {
it('should add two numbers', () => {
expect(add(2, 3)).toBe(5);
});
});

Now generate tests for [function]."

4. Request Explanations​

✅ "Write the code and explain:
- Why you chose this approach
- What could go wrong
- How to extend it later"

5. Iterate and Refine​

Iteration 1: "Explain concept X"
Iteration 2: "Show pseudocode"
Iteration 3: "Convert to Python"
Iteration 4: "Add error handling"
Iteration 5: "Optimize performance"

See our Prompt Engineering Guide.


Should junior developers use AI?​

Yes, but with guidance:

Benefits for juniors:

  • ✅ Learn syntax faster
  • ✅ See different approaches
  • ✅ Get unstuck quicker
  • ✅ Understand patterns
  • ✅ Explore new technologies

Risks:

  • ❌ Might not learn to debug independently
  • ❌ Could skip fundamentals
  • ❌ May develop bad habits
  • ❌ Might not understand the code

Recommended approach:

## Junior Developer AI Guidelines

### Phase 1: Learning (Months 1-3)
- Use AI to explain concepts
- Ask "why" not just "how"
- Write code yourself first
- Use AI to review your code
- Practice without AI sometimes

### Phase 2: Guided Use (Months 4-6)
- Use AI for boilerplate
- Always understand before using
- Have senior review AI code
- Document what you learn

### Phase 3: Independent (Months 6+)
- Use AI as productivity tool
- Trusted to review AI output
- Can identify bad AI suggestions
- Help other juniors

### Always
- [ ] Understand before using
- [ ] Test thoroughly
- [ ] Ask for help when stuck
- [ ] Learn the fundamentals

For mentors:

Help juniors by:
- Review their AI usage
- Explain why AI suggestion is good/bad
- Show them better prompts
- Encourage understanding over copying
- Set clear guidelines

How do I measure if AI is helping?​

Track these metrics:

Quantitative Metrics​

## Week 1 (Before AI)
- Features completed: 2
- Bugs fixed: 8
- Tests written: 15
- Code reviews: 6 hours
- Learning new framework: 12 hours
- Stuck time: 8 hours

## Week 2 (With AI)
- Features completed: 4 (2x improvement)
- Bugs fixed: 18 (2.25x improvement)
- Tests written: 40 (2.67x improvement)
- Code reviews: 3 hours (50% faster)
- Learning new framework: 3 hours (4x faster)
- Stuck time: 2 hours (75% reduction)

## ROI Calculation
Time saved: 20 hours/week
Hourly rate: $50
Value: $1,000/week
AI cost: $20/month = $5/week

ROI: 200x return on investment
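
The arithmetic above is easy to adapt to your own numbers; a rough back-of-the-envelope helper:

def ai_roi(hours_saved_per_week, hourly_rate, monthly_cost):
    """Rough ROI multiple: value of time saved vs. what the tooling costs."""
    weekly_value = hours_saved_per_week * hourly_rate
    weekly_cost = monthly_cost / 4  # approximate weeks per month
    return weekly_value / weekly_cost

print(ai_roi(20, 50, 20))  # 200.0 -> the 200x figure used above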

Qualitative Metrics​

Ask yourself:
- Am I learning faster?
- Am I less frustrated?
- Am I more confident trying new things?
- Is my code quality improving?
- Am I enjoying coding more?

Team Metrics​

Track for 1 month:
Before AI:
- Sprint velocity: 40 points
- Bug rate: 15 bugs/sprint
- Code review time: 10 hours
- Onboarding time: 2 weeks

After AI:
- Sprint velocity: 65 points (+62%)
- Bug rate: 12 bugs/sprint (-20%)
- Code review time: 6 hours (-40%)
- Onboarding time: 1 week (-50%)

πŸ› Troubleshooting​

Why is the AI not understanding my question?​

Common issues and fixes:

1. Too Vague​

❌ "Fix my code"

✅ "I have a Python function that should sort a list,
but it's throwing a TypeError on line 15.
Here's the code: [paste]
Here's the error: [paste]
Expected: sorted list
Actual: TypeError"

2. Missing Context​

✅ Always include:
- Language and version (Python 3.11)
- Framework and version (Django 4.2)
- What you've tried
- Full error message
- Expected vs actual behavior

3. Too Much at Once​

❌ "Build me a full social media app"

✅ Break it down:
1. "Design database schema for users and posts"
2. "Create user authentication endpoints"
3. "Add post creation and retrieval"
4. [etc, step by step]

4. Assuming Knowledge​

✅ "In my React app [describe architecture],
I have a component that [describe behavior].
When I [action], I expect [result] but get [actual].

Here's the relevant code:
[paste]

Tech stack:
- React 18.2
- TypeScript 5.0
- Redux Toolkit
- React Router 6"

The AI keeps generating similar wrong code​

Try these strategies:

1. Start Fresh Conversation​

  • AI has context from previous messages
  • Starting new chat clears bad context

2. Be More Explicit​

"Stop. The previous solutions all had [specific issue].

Instead, I need:
- [requirement 1]
- [requirement 2]
- [requirement 3]

Do NOT use [bad approach].
Instead use [better approach].

Show me pseudocode first before implementation."

3. Show Counterexample​

"You keep suggesting this:
[paste AI's suggestion]

But that won't work because [explain why].

I need a solution that handles [specific case].
Here's an example of what won't work and why:
[paste example]"

4. Ask for Alternatives​

"Show me 3 completely different approaches to solve this:
1. [Approach A]
2. [Approach B]
3. [Approach C]

Explain pros/cons of each."

5. Try Different AI​

  • ChatGPT vs Claude often give different solutions
  • One might understand your problem better

How do I handle errors in AI-generated code?​

Step-by-step debugging process:

Step 1: Understand the Error
Don't just copy error to AI immediately.
- Read the error message
- Look at the stack trace
- Identify the line causing the error
- Think about what might be wrong

Step 2: Try Quick Fixes
- Check for typos
- Verify imports
- Check variable names
- Review function signatures

Step 3: Ask AI for Help

If still stuck, ask AI:

"I got this error in [language/framework]:

Error message:
[full error]

Code:
[paste code]

Line that fails: [line number]
What I've tried: [list attempts]

Expected behavior: [describe]
Actual behavior: [describe]

Please:

  1. Explain the root cause
  2. Provide a fix with explanation
  3. Explain how to prevent this
  4. Suggest tests to add"

Step 4: Understand the Fix​

Don't just apply the fix:

  • Read the explanation
  • Understand why it was broken
  • Understand why fix works
  • Learn the pattern

Step 5: Prevent Future Issues​

  • Add error handling
  • Add validation
  • Write tests
  • Document edge cases

📚 Learning & Career

How do I stay updated with AI coding tools?

Resources to follow:

Official Channels

  • OpenAI Blog: https://openai.com/blog
  • Anthropic News: https://www.anthropic.com/news
  • GitHub Blog: https://github.blog

Communities

  • r/ChatGPT - Reddit community
  • r/OpenAI - OpenAI discussion
  • AI Coding Club Discord - Join us: https://discord.gg/aicodingclub

Newsletters

  • TLDR AI - Daily AI news
  • The Rundown AI - AI updates
  • Our newsletter - AI coding tips

What to watch for

Check monthly:
- [ ] New model releases
- [ ] Price changes
- [ ] New features
- [ ] API updates
- [ ] Security advisories
- [ ] Best practices updates

What skills should I focus on in the AI era?​

Skills that matter more now:

1. System Design​

AI can generate components, but you need to:
- Design overall architecture
- Make technology choices
- Plan scalability
- Consider trade-offs

2. Prompt Engineering​

Become expert at:
- Writing effective prompts
- Getting desired output
- Iterating on responses
- Combining AI with tools

3. Code Review​

Critical skill:
- Spot AI mistakes
- Identify security issues
- Verify correctness
- Ensure maintainability

4. Problem Decomposition​

Break big problems into:
- Small, solvable pieces
- Clear requirements
- Testable components
- AI can help with pieces

5. Communication​

More important than ever:
- Explain technical decisions
- Work with stakeholders
- Document architecture
- Mentor teammates

6. Domain Knowledge​

Understand:
- Business requirements
- User needs
- Industry standards
- Best practices

Skills still valuable:

  • Debugging complex issues
  • Performance optimization
  • Security expertise
  • Testing strategies
  • DevOps and infrastructure

Are AI coding bootcamps worth it?​

Depends on the bootcamp:

Red flags:

  • ❌ "No coding needed!"
  • ❌ Teaches only AI prompts
  • ❌ No fundamentals
  • ❌ Promises job in weeks
  • ❌ Very expensive ($10k+)

Good signs:

  • ✅ Teaches fundamentals FIRST
  • ✅ AI as productivity tool
  • ✅ Real projects
  • ✅ Code review practice
  • ✅ Reasonable price
  • ✅ Job placement support

Better alternative:

## Self-Study Path (3-6 months)

Month 1-2: Fundamentals
- FreeCodeCamp or The Odin Project
- Learn JavaScript/Python basics
- Build small projects WITHOUT AI

Month 3-4: AI-Assisted Development
- Rebuild projects WITH AI
- Compare approaches
- Learn prompt engineering
- Our tutorials: /docs/tutorials

Month 5-6: Real Projects
- Contribute to open source
- Build portfolio projects
- Use AI to move faster
- Focus on code quality

Cost: Free to $100
Result: Strong fundamentals + AI skills

🔗 Still Have Questions?

Where can I get more help?​

Community Support:

Documentation:

Report Issues:


📝 FAQ Summary

Top 5 Questions:

  1. Which tool? → GPT-4o (general), DeepSeek (free), Claude (quality)
  2. How much? → Free to $20/month for most developers
  3. Is it safe? → Yes, but don't share sensitive data
  4. Will it replace me? → No, it makes you more productive
  5. How to get started? → Try ChatGPT free, read our tutorial

Quick Tips:

  • ✅ Always review AI code
  • ✅ Learn, don't just copy
  • ✅ Test thoroughly
  • ✅ Check security
  • ✅ Track costs
  • ✅ Be transparent with your team

Last Updated: 2025-11-10 | Version: 2.0

Didn't find your question? Ask in our community or suggest a question