Thursday, February 16, 2023

Data structure basics

As a data professional, understanding data structures is essential to writing efficient code. Here are 10 key points to keep in mind:


1. Data structures are tools that enable you to store and manipulate data effectively. They include arrays, linked lists, stacks, queues, trees, and more

2. Each data structure has its own unique properties and advantages, so it's important to choose the right one for your needs

3. Arrays are useful for storing and accessing data quickly, while linked lists are better for dynamic data that needs to be updated frequently

4. Stacks and queues are often used for managing workflows, and trees and graphs are useful for representing hierarchical or networked data

5. It's important to understand the time and space complexity of different data structures, as they can have a big impact on the performance of your code

6. Understanding the trade-offs between different data structures is crucial when optimizing code. For example, hash tables offer very fast lookups, but they can be memory-intensive, and their performance degrades when many keys collide (a short Python sketch follows this list)

7. Memory allocation and deallocation are important considerations when working with data structures. In some cases, it may be more efficient to pre-allocate memory for a data structure rather than allocating and deallocating it dynamically

8. Advanced data structures like self-balancing binary search trees and hash tables with open addressing can be powerful tools for handling large amounts of data efficiently. However, they also require a deeper understanding of algorithms and data structures

9. While data structures are a fundamental part of computer science, they are just one tool in your toolbox. When designing algorithms, it's important to consider the entire problem and choose the best approach based on factors like time complexity, space complexity, and maintainability

10. Finally, it's worth noting that choosing the right data structure is just the first step. You also need to know how to implement it effectively and optimize it for your use case
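As a short Python sketch of points 4 and 6 (the values are made up for illustration): a list works well as a stack, collections.deque as a queue, and a set gives fast hash-based lookups at the cost of extra memory.

from collections import deque

stack = []                 # a list makes a fine stack: O(1) append/pop at the end
stack.append("task1")
stack.append("task2")
print(stack.pop())         # task2 - last in, first out

queue = deque()            # deque gives O(1) appends and pops at both ends
queue.append("job1")
queue.append("job2")
print(queue.popleft())     # job1 - first in, first out

seen = set()               # hash-based membership checks are O(1) on average,
seen.add("user_42")        # but the table costs extra memory (point 6)
print("user_42" in seen)   # True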

Wednesday, February 15, 2023

Short Overview of SQL commands

DDL commands:

1. Create 

2. Alter 

3. Drop 

4. Truncate

5. Rename 


DML commands:

1. Select

2. Insert

3. Update

4. Delete


DCL commands:

1. Grant

2. Revoke
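As a minimal sketch of the DDL and DML commands, run through Python's built-in sqlite3 module (the table and column names are made up; SQLite does not support TRUNCATE or the DCL commands GRANT/REVOKE, which belong to server databases such as MySQL or PostgreSQL):

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

# DDL: define and change the schema
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("ALTER TABLE users ADD COLUMN email TEXT")
cur.execute("ALTER TABLE users RENAME TO customers")

# DML: work with the rows
cur.execute("INSERT INTO customers (name, email) VALUES (?, ?)", ("alice", "a@example.com"))
cur.execute("UPDATE customers SET email = ? WHERE name = ?", ("alice@example.com", "alice"))
print(cur.execute("SELECT id, name, email FROM customers").fetchall())
cur.execute("DELETE FROM customers WHERE name = ?", ("alice",))

# DDL again: drop the table when done
cur.execute("DROP TABLE customers")
conn.close()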



Tuesday, February 14, 2023

Python app on AWS Lambda


  1. Create a new AWS Lambda function: Go to the AWS Lambda console and create a new function by selecting "Create function". You can choose to start with a blueprint or create a function from scratch.

  2. Choose Python as the runtime: For the runtime, choose Python. You can also choose the version of Python you want to use.

  3. Write your Python code: You can write your Python code in the inline code editor in the AWS Management Console, or you can upload a .zip file containing your code (a minimal handler sketch follows this list).

  4. Configure your function: You need to configure your function's triggers and other settings, such as environment variables, memory size, and timeout. You can do this in the AWS Management Console or using the AWS CLI.

  5. Deploy your function: After writing your code and configuring your function, you can deploy it by clicking the "Deploy" button in the AWS Management Console or using the AWS CLI.

  6. Test your function: You can test your function in the AWS Management Console by providing test inputs and checking the function's output.

  7. Monitor your function: You can monitor your function's performance, invocations, and error rates using Amazon CloudWatch.
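As a minimal sketch of step 3, a handler might look like the following (the event's "name" key and the greeting are made up for illustration; with a file named lambda_function.py the entry point is typically configured as lambda_function.lambda_handler):

# lambda_function.py
import json

def lambda_handler(event, context):
    # event carries the trigger's payload; context carries runtime metadata
    name = event.get("name", "world")     # "name" is an illustrative key
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }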

sys module

The sys module contains a lot of information about Python's import system. First of all, the list of currently imported modules is available through the sys.modules variable. It's a dictionary where the key is the module name and the value is the module object.

>>> sys.modules['os']

<module 'os' from '/usr/lib/python2.7/os.pyc'>
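A small sketch of inspecting sys.modules from a script:

import sys
import os                          # make sure 'os' has been imported

print('os' in sys.modules)         # True - os is now loaded
print(sys.modules['os'] is os)     # True - same module object
print(len(sys.modules))            # how many modules are currently loaded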


Standard library modules:


atexit allows you to register functions to call when your program exits.

argparse provides functions for parsing command-line arguments.

bisect provides bisection algorithms for searching and inserting into sorted lists.

calendar provides a number of date-related functions.

codecs provides functions for encoding and decoding data.

collections provides a variety of useful data structures.

copy provides functions for copying data.

csv provides functions for reading and writing CSV files.

datetime provides classes for handling dates and times.

fnmatch provides functions for matching Unix-style filename patterns.
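A brief sketch exercising a few of these modules (the sample values are made up):

import bisect
import collections
import datetime
import fnmatch

scores = [10, 20, 30, 40]
bisect.insort(scores, 25)                           # insert while keeping the list sorted
print(scores)                                       # [10, 20, 25, 30, 40]

counts = collections.Counter("abracadabra")         # a handy collections data structure
print(counts.most_common(2))                        # [('a', 5), ('b', 2)]

print(datetime.date.today().isoformat())            # e.g. '2023-02-14'

print(fnmatch.fnmatch("report_2023.csv", "*.csv"))  # True - Unix-style pattern match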



git commands - part 2

git rm - removes files from the repository and the file system.

git mv - renames or moves files within the repository.

git cherry-pick - selectively applies changes from a specific commit to the current branch.

git revert - undoes changes by creating a new commit that reverses previous commits.

git clean - removes untracked files and directories from the working directory.

git archive - creates an archive (such as tar or zip) of files from the repository.

git bisect - performs a binary search through the commit history to find the commit that introduced a change.

git submodule - includes one git repository within another as a subdirectory.

git grep - searches for a specific string or pattern in the repository.

git lfs - manages large files and binary assets in a git repository.

 


Python List Methods
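A quick sketch of the built-in list methods (the sample values are made up):

nums = [3, 1, 4]
nums.append(1)          # [3, 1, 4, 1]
nums.extend([5, 9])     # [3, 1, 4, 1, 5, 9]
nums.insert(0, 2)       # [2, 3, 1, 4, 1, 5, 9]
nums.remove(1)          # removes the first 1 -> [2, 3, 4, 1, 5, 9]
last = nums.pop()       # 9; list is now [2, 3, 4, 1, 5]
print(nums.index(4))    # 2
print(nums.count(1))    # 1
nums.sort()             # [1, 2, 3, 4, 5]
nums.reverse()          # [5, 4, 3, 2, 1]
copy_of = nums.copy()   # shallow copy
nums.clear()            # []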

 


Forward proxy vs Reverse proxy

 Forward Proxy

---------------
A forward proxy, also known as a "proxy server," or simply "proxy," is a server that sits in front of one or more client machines and acts as an intermediary between the clients and the internet. When a client machine makes a request to a resource on the internet, the request is first sent to the forward proxy. The forward proxy then forwards the request to the internet on behalf of the client machine and returns the response to the client machine.

A forward proxy is mostly used for:
1. Client Anonymity
2. Caching
3. Traffic Control
4. Logging
5. Request/Response Transformation
6. Encryption

Reverse Proxy
---------------
A reverse proxy is a server that sits in front of one or more web servers and acts as an intermediary between the web servers and the internet. When a client requests a resource hosted on those web servers, the request is first sent to the reverse proxy. The reverse proxy forwards the request to one of the web servers, which returns the response to the reverse proxy. The reverse proxy then returns the response to the client.

A reverse proxy is mostly used for:
1. Server Anonymity
2. Caching
3. Load Balancing
4. DDoS Protection
5. Canary Experimentation
6. URL/Content Rewriting
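As a minimal Python sketch of the reverse-proxy idea (the backend address, the port, and the GET-only handling are assumptions for illustration; a real deployment would use something like Nginx or HAProxy):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

BACKEND_URL = "http://127.0.0.1:9000"   # hypothetical upstream web server

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the client's request to the backend server...
        upstream = urlopen(Request(BACKEND_URL + self.path))
        body = upstream.read()
        # ...and relay the backend's response to the client. Error handling,
        # caching, and load balancing are left out of this sketch.
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ReverseProxyHandler).serve_forever()

Clients only ever talk to the proxy's address (127.0.0.1:8080 here); which backend machine actually served the response stays hidden, which is the server-anonymity point above.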