Frequently Asked Questions | FAQ

Here I have collected, in one place, the questions that are frequently asked in interviews, along with their answers.

Go

Is Go an OOP language?
(3 times - ozone, hamkaran-system, sternx)

Does Go have the concept of inheritance?

No. Inheritance means that a subclass inherits the properties of its superclass, and it is one of the most important concepts in Object-Oriented Programming. Since Go does not support classes, there is no classical inheritance; the closest mechanism is struct embedding. You cannot directly extend a struct; instead, Go relies on composition, where structs are embedded in other structs to build new types. So you can say there is no inheritance concept in Go.

What is a mutex?

What is a goroutine?

What is a channel?

What is a WaitGroup?

How does Go manage memory? How does the garbage collector work?

Python

What is a decorator?
(5 times - digikala, siz-tel, exalab, karnameh, sternx)

Why are we able to change Python tuple values even though tuples are immutable? Suppose:
(1 time - snapp)

a = 1
b = 2
a, b = b, a

Tuple Packing: When you write a, b = 1, 2, Python packs the values 1 and 2 into a tuple (1, 2) and then unpacks them into the variables a and b. This is a convenient way to assign multiple variables at once.

a, b = 1, 2
# This is equivalent to:
# temp_tuple = (1, 2)
# a = temp_tuple[0]
# b = temp_tuple[1]

Tuple Unpacking: When you write a, b = b, a, Python creates a tuple (b, a) with the current values of b and a, and then unpacks this tuple back into the variables a and b. This is a common idiom in Python for swapping the values of two variables without needing a temporary variable.

a, b = b, a
# This is equivalent to:
# temp_tuple = (b, a)
# a = temp_tuple[0]
# b = temp_tuple[1]

Why this works:

Immutability of tuples: Immutability means that a tuple itself cannot be changed after it is created. However, this does not prevent you from creating new tuples or from reassigning variables to new tuples.

Variable reassignment: The operation a, b = b, a creates a new tuple and then reassigns the variables a and b to the elements of this new tuple. No existing tuple is modified; a new tuple is created and the variables are updated to reference its elements.
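
A quick way to see this in an interpreter (a minimal sketch): the swap rebinds the names a and b, while any tuple that already exists is left untouched.

a, b = 1, 2
t = (a, b)                   # an actual tuple object holding the current values
before = id(t)

a, b = b, a                  # builds a new tuple (b, a), then rebinds a and b to its elements

print(a, b)                  # 2 1  -> the names now refer to the swapped values
print(t, id(t) == before)    # (1, 2) True  -> the existing tuple was never mutated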

Is Python call by reference or call by value?
(1 time - narvan)

In Python, "call by value" and "call by reference" are often discussed, but Python actually uses a mechanism sometimes referred to as "call by object reference" or "call by assignment". Here is what this means:

Immutable objects: When you pass immutable objects (like integers, strings, or tuples) to a function, Python behaves similarly to "call by value". The function receives a copy of the reference to the object, and since the object itself cannot be changed, any modification attempted inside the function does not affect the original object outside it. For example, if you pass a string to a function and try to change it, the original string remains unchanged.

Mutable objects: When you pass mutable objects (like lists, dictionaries, or sets) to a function, changes made inside the function do affect the original object outside it. The function receives a reference to the object, and since the object is mutable, modifications are reflected in the original. This behavior is often likened to "call by reference".
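
A short sketch of this behavior (the function and variable names below are just for illustration):

def update(text, items):
    text = text + "!"        # rebinds the local name only; the caller's string is unaffected
    items.append("!")        # mutates the shared list object; the caller sees this change

name = "hello"
stuff = ["hello"]
update(name, stuff)
print(name)    # hello            -> immutable argument behaves like "call by value"
print(stuff)   # ['hello', '!']   -> mutable argument behaves like "call by reference"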

How does Python manage memory? Explain Python memory management.

What is the difference between concurrency in Python and Go? todo

Database

SQL vs NoSQL.
(4 times - snapp, exalab, siz-tel, narvan)

What is ACID?
(3 times - digikala, karnameh, mhholding)

Why is Redis fast?

Name some Redis data structures.

What is indexing? How does a database index columns?
(4 times - snappshop, snapp, karnameh, quiz of kings)

Why is using too many indexes a bad thing?
(3 times - snappshop, snapp, karnameh)

  1. Increased Storage Requirements
    Each index consumes additional disk space. For large databases, this can lead to significant storage overhead, especially if many indexes are created on columns that are not frequently queried.

  2. Slower Write Operations
    Every time a record is inserted, updated, or deleted, all associated indexes must also be updated. This can slow down write operations considerably. For databases with high transaction volumes, the overhead of maintaining numerous indexes can lead to performance bottlenecks (a small sketch demonstrating this follows the list).

  3. Index Maintenance Overhead
    Indexes require regular maintenance to ensure they remain efficient. Over time, as data is modified, indexes can become fragmented, which can degrade performance. This maintenance can be resource-intensive, requiring additional processing time and effort.

  4. Diminished Query Performance
    While indexes are designed to speed up read operations, having too many can lead to confusion for the query optimizer. The optimizer may struggle to determine which index to use for a given query, potentially leading to suboptimal execution plans and slower performance.

  5. Complexity in Query Optimization
    With many indexes, the complexity of the query optimization process increases. The database management system (DBMS) must evaluate multiple indexes to determine the most efficient way to execute a query. This can lead to longer planning times and may not always result in the best performance.

  6. Reduced Performance for Certain Queries
    Some queries may not benefit from additional indexes, particularly those that involve complex joins or aggregations. In such cases, the overhead of maintaining multiple indexes can outweigh the performance benefits, leading to slower overall query execution.
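
To make point 2 concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module (the table, columns, and index names are made up for illustration). The same insert workload gets slower as more indexes have to be maintained on every write:

import sqlite3
import time

def time_inserts(extra_indexes):
    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, age INTEGER)")
    # Redundant single-column indexes, purely to add write-time maintenance work.
    for i in range(extra_indexes):
        col = ("name", "email", "age")[i % 3]
        cur.execute(f"CREATE INDEX idx_{i} ON users({col})")
    rows = [(f"user{i}", f"user{i}@example.com", i % 90) for i in range(50_000)]
    start = time.perf_counter()
    cur.executemany("INSERT INTO users (name, email, age) VALUES (?, ?, ?)", rows)
    con.commit()
    return time.perf_counter() - start

print(f"0 extra indexes: {time_inserts(0):.3f}s")
print(f"6 extra indexes: {time_inserts(6):.3f}s")   # every insert now has to update 6 index structures as well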

What are database isolation levels?

We have a query that is too slow; how would you try to speed it up?
(2 times - snappshop, hamkaran system)

  1. Analyze the Query Execution Plan
    Use EXPLAIN: Run the query with the EXPLAIN command (or EXPLAIN ANALYZE for more detailed output) to understand how the database engine executes the query. This will provide insights into which indexes are being used, join methods, and where potential bottlenecks lie. Identify Slow Operations: Look for operations that have high costs, such as full table scans, large sorts, or expensive joins. (A runnable sketch using SQLite follows this list.)

  2. Optimize Index Usage
    Create Indexes: Ensure that appropriate indexes are in place for columns used in WHERE, JOIN, ORDER BY, and GROUP BY clauses. Review Existing Indexes: Check if existing indexes are being utilized effectively. Sometimes, redundant or unused indexes can slow down write operations. Consider Composite Indexes: If multiple columns are frequently queried together, consider creating composite indexes.

  3. Rewrite the Query
    Simplify the Query: Break down complex queries into simpler sub-queries or Common Table Expressions (CTEs) to improve readability and performance. Avoid SELECT *: Instead of selecting all columns, specify only the columns you need. This reduces the amount of data processed and transferred. Use EXISTS Instead of IN: If applicable, using EXISTS can be faster than IN for subqueries, especially when dealing with large datasets.

  4. Optimize Joins
    Check Join Conditions: Ensure that join conditions are using indexed columns. Limit the Number of Joins: If possible, reduce the number of joins or rearrange them to optimize performance. Use INNER JOIN Instead of OUTER JOIN: If you don’t need all rows from both tables, prefer INNER JOIN as it can be more efficient.

  5. Use Query Caching
    Enable Query Caching: If your database supports it, enable query caching for frequently executed queries. This can significantly reduce execution time for repeated queries.

  6. Partition Large Tables
    Table Partitioning: For very large tables, consider partitioning them based on certain criteria (e.g., date ranges). This can improve query performance by limiting the amount of data scanned.

  7. Optimize Database Configuration
    Tune Database Settings: Review and optimize database configuration settings such as memory allocation, cache sizes, and connection limits based on your workload.

  8. Monitor and Analyze Performance
    Use Monitoring Tools: Employ database monitoring tools to track query performance over time and identify trends or recurring issues. Log Slow Queries: Enable slow query logging to capture queries that exceed a certain execution time, allowing you to focus on optimizing the most problematic queries.

  9. Consider Denormalization
    Denormalization: In some cases, denormalizing the database schema (i.e., combining tables) can improve performance for read-heavy applications, at the cost of increased complexity for write operations.

  10. Review Application Logic
    Optimize Application Code: Sometimes, the issue may not be with the query itself but with how it is called from the application. Review the application logic to ensure that it is making efficient use of database queries.
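
As a concrete illustration of steps 1 and 2, here is a minimal, self-contained sketch using Python's built-in sqlite3 module and its EXPLAIN QUERY PLAN statement (PostgreSQL and MySQL use EXPLAIN / EXPLAIN ANALYZE instead; the table and index names are made up). Before the index exists, the plan reports a full table scan; afterwards it reports a search using the index:

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

query = "SELECT id, total FROM orders WHERE customer_id = ?"

# Without an index: the plan reports a full scan of the orders table.
print(cur.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# With the index: the plan reports a search using idx_orders_customer.
print(cur.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())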

Git

Merge vs Rebase... explain the differences, pros, and cons.
(3 times - snappshop, wallex, sternx)

What was your git flow at your previous company?
(3 times - snappshop, wallex, digikala)

What is fast-forward?
(1 time - snappshop)

Design Pattern

What is SOLID?
(4 times - snapp, wallex, digikala, itoll)