This is something very interesting.
We can create a table called Calendar in Power BI. Now how do we do that?
Yes, you are right, we will use DAX.
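A minimal sketch of such a calculated table (Modeling > New table in Power BI Desktop). CALENDARAUTO() scans the model's date columns to pick the date range; the extra columns below are just example additions, not the only possible ones:

Calendar =
ADDCOLUMNS (
    CALENDARAUTO (),
    "Year", YEAR ( [Date] ),
    "Month Number", MONTH ( [Date] ),
    "Month", FORMAT ( [Date], "MMM" )
)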
Database table partitioning in SQL Server (sqlshack.com)
At this link, I came across one of the best tutorials on the partitioning concept.
The code below filters the rows where the 'Year' column is 2017 and selects only the 'Year' and 'Revenue (GBP Millions)' columns, then prints the filtered rows with the corresponding values for both columns.
Make sure to replace 'Year' and 'Revenue (GBP Millions)' with the actual column names in your dataframe. If there are more columns you want to include in the filtered result, add them within the double square brackets, separated by commas.
Note: If you want to perform any calculations or further analysis on the filtered data, it is recommended to assign it to a new variable, like filtered_data, rather than modifying the original dataframe directly.
filtered_data = df[df['Year'] == 2017][['Year', 'Revenue (GBP Millions)']]
print(filtered_data)

Let's first create a table:
CREATE DATABASE blog_portal;
USE blog_portal;
-- Create the Employee table
CREATE TABLE Employee (
employee_id INT,
team_id INT,
PRIMARY KEY (employee_id)
);
-- Insert the data into the Employee table
INSERT INTO Employee (employee_id, team_id)
VALUES (1, 8),
(2, 8),
(3, 8),
(4, 7),
(5, 9),
(6, 9);
Write an SQL query to find the team size of each employee. Return the result table in any order. The expected result format is shown in the example below.
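Given the data inserted above, the expected result would be:
employee_id | team_size
1           | 3
2           | 3
3           | 3
4           | 1
5           | 2
6           | 2
(Employees 1-3 are on team 8, which has three members; employee 4 is alone on team 7; employees 5 and 6 are on team 9.)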
First intuitive way to solve the problem:
WITH CTE AS (
    SELECT team_id, COUNT(team_id) AS team_mem_count
    FROM Employee
    GROUP BY team_id
)
SELECT e.employee_id, CTE.team_mem_count
FROM Employee AS e
RIGHT JOIN CTE
    ON CTE.team_id = e.team_id
ORDER BY e.employee_id;
Best way: CASE WHEN with a scalar subquery
SELECT employee_id,
       team_id,
       CASE
           WHEN team_id IS NULL THEN 0
           ELSE (SELECT COUNT(*) FROM Employee WHERE team_id = e.team_id)
       END AS team_size
FROM Employee AS e;
In this case, the subquery acts as a scalar subquery, which can be used in the SELECT list without requiring a GROUP BY clause. The subquery is evaluated independently for each row, providing the corresponding team_size for each employee without any explicit grouping.
For example, when the subquery's condition WHERE team_id = e.team_id runs, it only matches the team_id of the current outer row (from Employee AS e) against the team_ids in the inner Employee table. Keep in mind that each row therefore returns the COUNT(*) of only the rows matching that condition.
To load CSV data into MySQL from Python, install SQLAlchemy and pandas first:
conda install -c anaconda sqlalchemy
pip install pandas
Way 1:
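A minimal sketch of what Way 1 could look like, assuming pandas + SQLAlchemy with a MySQL driver such as pymysql installed; the connection string, CSV path, and table name below are placeholders to adjust:

# Minimal sketch: load a CSV into MySQL with pandas + SQLAlchemy.
# Assumes a MySQL driver (e.g. pymysql) is installed; credentials and paths are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost:3306/blog_portal")

df = pd.read_csv("employees.csv")                                   # read the source CSV
df.to_sql("Employee", con=engine, if_exists="append", index=False)  # append rows into the existing table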
Way 2:
Way 3:
1. Put the CSV file in the Uploads folder under the MySQL ProgramData directory (the server's secure_file_priv location).
2. Create the table.
3. Use the following code:
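A sketch of that step using MySQL's LOAD DATA INFILE; the path, delimiters, and header handling below are assumptions to adjust for your server and file:

-- Load the CSV placed in the MySQL Uploads folder into the Employee table
LOAD DATA INFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/employees.csv'
INTO TABLE Employee
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;  -- skip the header row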
Way 4 (when the previous ways fail):
This is similar to Way 1, but one thing to remember: this step needs to be added to Way 1 if the CSV file's column names differ from the table's. There are multiple ways to deal with it. We could open the CSV file and change the column names one by one, but if that is not practical to do every day, we can use the following piece of code instead.
"""
Happy Coding.
Simply follow the settings shown in the screenshots.
docker-compose-es.yaml
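The gist of those settings: docker-compose-es.yaml extends the standard Airflow compose file with an Elasticsearch service named elastic (which is why curl can reach http://elastic:9200 from inside the containers). A rough sketch of such a service block, added under services:; the image tag and options are assumptions:

  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.3.2
    environment:
      - discovery.type=single-node          # single-node cluster for local development
      - xpack.security.enabled=false        # allow unauthenticated curl for this demo
    ports:
      - "9200:9200"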
docker-compose -f docker-compose-es.yaml up -d    # start the stack in detached mode
docker-compose -f docker-compose-es.yaml ps       # check that the containers are up and healthy
docker exec -it materials-airflow-scheduler-1 /bin/bash    # open a shell inside the scheduler container
curl -X GET 'http://elastic:9200'                 # from inside the container, check that Elasticsearch responds
Let's create a plugin.
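A minimal sketch of such a plugin (for example in plugins/elastic/elastic_hook.py), assuming the elasticsearch Python package is installed and an Airflow connection named 'elastic_default' points at the elastic container; the method names are illustrative:

from airflow.hooks.base import BaseHook
from airflow.plugins_manager import AirflowPlugin
from elasticsearch import Elasticsearch


class ElasticHook(BaseHook):
    def __init__(self, conn_id='elastic_default', *args, **kwargs):
        super().__init__(*args, **kwargs)
        conn = self.get_connection(conn_id)  # read host/port from the Airflow connection
        self.es = Elasticsearch([f"http://{conn.host}:{conn.port}"])

    def info(self):
        # Return basic cluster information from Elasticsearch
        return self.es.info()

    def add_doc(self, index, doc):
        # Index a single document into the given index
        return self.es.index(index=index, document=doc)


class AirflowElasticPlugin(AirflowPlugin):
    # Registers the hook so it shows up under `airflow plugins`
    name = 'elastic'
    hooks = [ElasticHook]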
docker-compose -f docker-compose-es.yaml ps
docker exec -it materials-airflow-scheduler-1 /bin/bash
docker-compose -f docker-compose-es.yaml stop    # stop and restart the stack so Airflow picks up the new plugin
docker-compose -f docker-compose-es.yaml up -d
docker exec -it materials-airflow-scheduler-1 /bin/bash
airflow plugins # this command should show you elastic plugins and all the other plugins we have created so far.
Now let's call the hook inside a DAG: create a file in the dags folder and name it elastic_dag.py.
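A minimal sketch of what elastic_dag.py could contain; the hook's import path, the start date, and the schedule are assumptions to adjust to your project layout:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

from hooks.elastic.elastic_hook import ElasticHook  # assumed location of the plugin's hook


def _print_es_info():
    # Instantiate the hook and print cluster info to the task log
    print(ElasticHook().info())


with DAG('elastic_dag', start_date=datetime(2023, 1, 1),
         schedule_interval='@daily', catchup=False) as dag:

    print_es_info = PythonOperator(
        task_id='print_es_info',
        python_callable=_print_es_info
    )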
You can trigger the DAG, then open the task's log to see the output.
More about trigger_rule?
Go here: Airflow Trigger Rules: All you need to know! - Marc Lamberti
1>> I created a file at project_folder/dags/xcom_dag.py.
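The original snippet isn't reproduced here; typically the first way simply means returning a value from the Python callable, which Airflow pushes as an XCom automatically under the key 'return_value'. A minimal sketch under that assumption:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def _t1():
    # Returning a value is the implicit way to push an XCom;
    # Airflow stores it under the key 'return_value'.
    return 42


with DAG('xcom_dag', start_date=datetime(2023, 1, 1),
         schedule_interval='@daily', catchup=False) as dag:

    t1 = PythonOperator(task_id='t1', python_callable=_t1)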
That was the first way of pushing an XCom. Now let's have a look at the second way.
In summary, the _t1(ti) function pushes the value 42 with the key 'my_key' to the XCom system for the specific task instance. This allows other tasks downstream in the DAG to access this value using the same key through the XCom system. The data pushed via XCom can be used for sharing information or transferring small amounts of data between tasks within the same DAG.
xcom_pull(key='my_key', task_ids='t1') is used to fetch the data from the previous push. Here is the full code below.
from airflow import DAG