fugue_spark
fugue_spark.dataframe
- class fugue_spark.dataframe.SparkDataFrame(df=None, schema=None)[source]
Bases:
DataFrame
DataFrame that wraps Spark DataFrame. Please also read the DataFrame Tutorial to understand this Fugue concept
- Parameters:
df (Any) – pyspark.sql.DataFrame
schema (Any) – Schema like object or pyspark.sql.types.StructType, defaults to None.
Note
You should use fugue_spark.execution_engine.SparkExecutionEngine.to_df() instead of constructing it by yourself. If schema is set, a type cast will be applied to the Spark DataFrame when the schema is different.
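Example (an illustrative sketch, not from the official docs; assumes a local pyspark installation and uses toy data):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

spark = SparkSession.builder.getOrCreate()
engine = SparkExecutionEngine(spark)

# wrap an existing pyspark.sql.DataFrame through the engine instead of
# calling the SparkDataFrame constructor directly
raw = spark.createDataFrame([(0, "a"), (1, "b")], ["x", "y"])
fdf = engine.to_df(raw)
print(fdf.schema)        # Fugue schema, e.g. x:long,y:str
print(type(fdf.native))  # the wrapped pyspark.sql.DataFrame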
- property alias: str
- alter_columns(columns)[source]
Change column types
- Parameters:
columns (Any) – Schema like object, all columns should be contained by the dataframe schema
- Returns:
a new dataframe with altered columns, the order of the original schema will not change
- Return type:
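Example (an illustrative sketch; the toy schema and engine setup are assumptions, not from the source):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [1, "b"]], "x:long,y:str")

# cast only column x to string; y keeps its type and the column order is preserved
df2 = df.alter_columns("x:str")
print(df2.schema)  # expected: x:str,y:str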
- as_array(columns=None, type_safe=False)[source]
Convert to 2-dimensional native python array
- Parameters:
columns (List[str] | None) – columns to extract, defaults to None
type_safe (bool) – whether to ensure output conforms with its schema, defaults to False
- Returns:
2-dimensional native python array
- Return type:
List[Any]
Note
If type_safe is False, then the returned values are ‘raw’ values.
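Example (an illustrative sketch using toy data; not from the official docs):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [1, "b"]], "x:long,y:str")

print(df.as_array())                # [[0, 'a'], [1, 'b']]
print(df.as_array(columns=["y"]))   # [['a'], ['b']]
print(df.as_array(type_safe=True))  # values coerced to conform with the schema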
- as_array_iterable(columns=None, type_safe=False)[source]
Convert to iterable of native python arrays
- Parameters:
columns (List[str] | None) – columns to extract, defaults to None
type_safe (bool) – whether to ensure output conforms with its schema, defaults to False
- Returns:
iterable of native python arrays
- Return type:
Iterable[Any]
Note
If type_safe is False, then the returned values are ‘raw’ values.
- as_arrow(type_safe=False)[source]
Convert to pyArrow DataFrame
- Parameters:
type_safe (bool)
- Return type:
- as_dict_iterable(columns=None)[source]
Convert to iterable of python dicts
- Parameters:
columns (List[str] | None) – columns to extract, defaults to None
- Returns:
iterable of python dicts
- Return type:
Iterable[Dict[str, Any]]
Note
The default implementation enforces type_safe True
- as_dicts(columns=None)[source]
Convert to a list of python dicts
- Parameters:
columns (List[str] | None) – columns to extract, defaults to None
- Returns:
a list of python dicts
- Return type:
List[Dict[str, Any]]
Note
The default implementation enforces type_safe True
- as_local_bounded()[source]
Convert this dataframe to a
LocalBoundedDataFrame
- Return type:
- property empty: bool
Whether this dataframe is empty
- head(n, columns=None)[source]
Get first n rows of the dataframe as a new local bounded dataframe
- Parameters:
n (int) – number of rows
columns (List[str] | None) – selected columns, defaults to None (all columns)
- Returns:
a local bounded dataframe
- Return type:
- property is_bounded: bool
Whether this dataframe is bounded
- property is_local: bool
Whether this dataframe is a local Dataset
- property native: DataFrame
The wrapped Spark DataFrame
- Return type:
- native_as_df()[source]
The dataframe form of the native object this Dataset class wraps. Dataframe form means the object contains schema information. For example, the native object of an ArrayDataFrame is a python array, which doesn’t contain schema information, so its native_as_df should be either a pandas dataframe or an arrow dataframe.
- Return type:
DataFrame
- property num_partitions: int
Number of physical partitions of this dataframe. Please read the Partition Tutorial
- peek_array()[source]
Peek the first row of the dataframe as array
- Raises:
FugueDatasetEmptyError – if it is empty
- Return type:
List[Any]
fugue_spark.execution_engine
- class fugue_spark.execution_engine.SparkExecutionEngine(spark_session=None, conf=None)[source]
Bases:
ExecutionEngine
The execution engine based on SparkSession. Please read the ExecutionEngine Tutorial to understand this important Fugue concept
- Parameters:
spark_session (SparkSession | None) – Spark session, defaults to None to get the Spark session by getOrCreate()
conf (Any) – Parameters like object, defaults to None; read the Fugue Configuration Tutorial to learn Fugue specific options
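Example (a minimal construction sketch; assumes a local pyspark installation):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

spark = SparkSession.builder.master("local[*]").getOrCreate()

# explicit session; passing spark_session=None would fall back to getOrCreate()
engine = SparkExecutionEngine(spark_session=spark)

print(engine.is_distributed)             # True
print(engine.get_current_parallelism())  # e.g. the number of local cores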
- broadcast(df)[source]
Broadcast the dataframe to all workers for a distributed computing framework
- Parameters:
df (DataFrame) – the input dataframe
- Returns:
the broadcasted dataframe
- Return type:
- dropna(df, how='any', thresh=None, subset=None)[source]
Drop NA records from dataframe
- Parameters:
df (DataFrame) – DataFrame
how (str) – ‘any’ or ‘all’. ‘any’ drops rows that contain any nulls. ‘all’ drops rows that contain all nulls.
thresh (int | None) – int, drops rows that have less than thresh non-null values
subset (List[str] | None) – list of columns to operate on
- Returns:
DataFrame with NA records dropped
- Return type:
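Example (an illustrative sketch with toy data containing nulls; not from the official docs):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [1, None], [None, None]], "x:long,y:str")

dropped = engine.dropna(df, how="any")      # keeps only the fully populated row
at_least_one = engine.dropna(df, thresh=1)  # keeps rows with at least 1 non-null value
print(dropped.as_array())  # [[0, 'a']]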
- fillna(df, value, subset=None)[source]
Fill NULL, NAN, NAT values in a dataframe
- Parameters:
df (DataFrame) – DataFrame
value (Any) – if scalar, fills all columns with same value. if dictionary, fills NA using the keys as column names and the values as the replacement values.
subset (List[str] | None) – list of columns to operate on. ignored if value is a dictionary
- Returns:
DataFrame with NA records filled
- Return type:
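Example (an illustrative sketch; the replacement value is made up):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [1, None]], "x:long,y:str")

# dict value: fill per column; a scalar value would fill all columns (or the subset)
filled = engine.fillna(df, value={"y": "missing"})
print(filled.as_array())  # [[0, 'a'], [1, 'missing']]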
- get_current_parallelism()[source]
Get the current number of parallelism of this engine
- Return type:
int
- intersect(df1, df2, distinct=True)[source]
Intersect df1 and df2
- Parameters:
df1 (DataFrame) – the first dataframe
df2 (DataFrame) – the second dataframe
distinct (bool) – whether to eliminate duplicate rows (INTERSECT DISTINCT) or keep them (INTERSECT ALL), defaults to True
- Returns:
the intersected dataframe
- Return type:
Note
Currently, the schema of df1 and df2 must be identical, or an exception will be thrown.
- property is_distributed: bool
Whether this engine is a distributed engine
- property is_spark_connect: bool
Whether the spark session is created by spark connect
- join(df1, df2, how, on=None)[source]
Join two dataframes
- Parameters:
df1 (DataFrame) – the first dataframe
df2 (DataFrame) – the second dataframe
how (str) – can accept semi, left_semi, anti, left_anti, inner, left_outer, right_outer, full_outer, cross
on (List[str] | None) – it can always be inferred, but if you provide, it will be validated against the inferred keys.
- Returns:
the joined dataframe
- Return type:
Note
Please read get_join_schemas()
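Example (an illustrative sketch with made-up toy dataframes):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df1 = engine.to_df([[0, "a"], [1, "b"]], "x:long,y:str")
df2 = engine.to_df([[0, 10.0]], "x:long,z:double")

# the join key x is inferred from the common columns; on=["x"] only validates it
joined = engine.join(df1, df2, how="inner", on=["x"])
print(joined.schema)      # x:long,y:str,z:double
print(joined.as_array())  # [[0, 'a', 10.0]]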
- load_df(path, format_hint=None, columns=None, **kwargs)[source]
Load dataframe from persistent storage
- Parameters:
path (str | List[str]) – the path to the dataframe
format_hint (Any | None) – can accept parquet, csv, json, defaults to None, meaning to infer
columns (Any | None) – list of columns or a Schema like object, defaults to None
kwargs (Any) – parameters to pass to the underlying framework
- Returns:
an engine compatible dataframe
- Return type:
For more details and examples, read Load & Save.
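Example (an illustrative sketch; the parquet path is a placeholder, not from the source):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())

# placeholder path; the format is inferred from the .parquet extension
df = engine.load_df("/tmp/people.parquet")
# only read a subset of columns
subset = engine.load_df("/tmp/people.parquet", columns=["name", "age"])
print(subset.schema)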
- property log: Logger
Logger of this engine instance
- persist(df, lazy=False, **kwargs)[source]
Force materializing and caching the dataframe
- Parameters:
df (DataFrame) – the input dataframe
lazy (bool) – True: the first usage of the output will trigger persisting; False (eager): persist is forced to happen immediately. Defaults to False
kwargs (Any) – parameter to pass to the underlying persist implementation
- Returns:
the persisted dataframe
- Return type:
Note
persist can only guarantee that the persisted dataframe is computed only once. However, this doesn’t mean the backend really breaks up the execution dependency at the persisting point. Commonly it doesn’t cause any issue, but if your execution graph is long, it may cause unexpected problems, for example a stack overflow.
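Example (an illustrative sketch with toy data):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [1, "b"]], "x:long,y:str")

cached = engine.persist(df)                  # eager: materialized and cached now
lazy_cached = engine.persist(df, lazy=True)  # cached on the first use of the output
print(cached.as_array())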
- repartition(df, partition_spec)[source]
Partition the input dataframe using partition_spec.
- Parameters:
df (DataFrame) – input dataframe
partition_spec (PartitionSpec) – how you want to partition the dataframe
- Returns:
repartitioned dataframe
- Return type:
Note
Before implementing, please read the Partition Tutorial
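Example (an illustrative sketch; the partition columns are made up):
from pyspark.sql import SparkSession
from fugue import PartitionSpec
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [1, "b"], [2, "a"]], "x:long,y:str")

# partition by column y, targeting 2 physical partitions
repartitioned = engine.repartition(df, PartitionSpec(by=["y"], num=2))
print(repartitioned.num_partitions)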
- sample(df, n=None, frac=None, replace=False, seed=None)[source]
Sample dataframe by number of rows or by fraction
- Parameters:
df (DataFrame) – DataFrame
n (int | None) – number of rows to sample, one and only one of n and frac must be set
frac (float | None) – fraction [0,1] to sample, one and only one of n and frac must be set
replace (bool) – whether replacement is allowed. With replacement, there may be duplicated rows in the result, defaults to False
seed (int | None) – seed for randomness, defaults to None
- Returns:
sampled dataframe
- Return type:
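Example (an illustrative sketch using a generated toy dataframe):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[i] for i in range(100)], "x:long")

frac_sample = engine.sample(df, frac=0.1, seed=0)        # roughly 10% of the rows
n_sample = engine.sample(df, n=5, replace=True, seed=0)  # 5 rows, duplicates allowed
print(frac_sample.count())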
- save_df(df, path, format_hint=None, mode='overwrite', partition_spec=None, force_single=False, **kwargs)[source]
Save dataframe to a persistent storage
- Parameters:
df (DataFrame) – input dataframe
path (str) – output path
format_hint (Any | None) – can accept parquet, csv, json, defaults to None, meaning to infer
mode (str) – can accept overwrite, append, error, defaults to “overwrite”
partition_spec (PartitionSpec | None) – how to partition the dataframe before saving, defaults to empty
force_single (bool) – force the output as a single file, defaults to False
kwargs (Any) – parameters to pass to the underlying framework
- Return type:
None
For more details and examples, read Load & Save.
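Example (an illustrative sketch; the output paths are placeholders):
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [1, "b"]], "x:long,y:str")

# placeholder paths; the format is inferred from the extension
engine.save_df(df, "/tmp/out.parquet", mode="overwrite")
engine.save_df(df, "/tmp/single.parquet", force_single=True)  # write one file instead of a folder of parts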
- property spark_session: SparkSession
- Returns:
The wrapped spark session
- Return type:
- subtract(df1, df2, distinct=True)[source]
df1 - df2
- Parameters:
df1 (DataFrame) – the first dataframe
df2 (DataFrame) – the second dataframe
distinct (bool) – whether to perform EXCEPT DISTINCT (True) or EXCEPT ALL (False), defaults to True
- Returns:
the subtracted dataframe (df1 - df2)
- Return type:
Note
Currently, the schema of df1 and df2 must be identical, or an exception will be thrown.
- take(df, n, presort, na_position='last', partition_spec=None)[source]
Get the first n rows of a DataFrame per partition. If a presort is defined, use the presort before applying take. presort overrides partition_spec.presort. The Fugue implementation of the presort follows Pandas convention of specifying NULLs first or NULLs last. This is different from the Spark and SQL convention of NULLs as the smallest value.
- Parameters:
df (DataFrame) – DataFrame
n (int) – number of rows to return
presort (str) – presort expression similar to partition presort
na_position (str) – position of null values during the presort. can accept first or last
partition_spec (PartitionSpec | None) – PartitionSpec to apply the take operation
- Returns:
n rows of DataFrame per partition
- Return type:
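Example (an illustrative sketch; the presort and partition columns are made up):
from pyspark.sql import SparkSession
from fugue import PartitionSpec
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())
df = engine.to_df([[0, "a"], [3, "a"], [2, "b"]], "x:long,y:str")

# top row per y-partition, ordered by x descending (NULLs last by default)
top = engine.take(df, n=1, presort="x desc", partition_spec=PartitionSpec(by=["y"]))
print(top.as_array())  # [[3, 'a'], [2, 'b']] (partition output order may vary)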
- to_df(df, schema=None)[source]
Convert a data structure to
SparkDataFrame
- Parameters:
df (Any) – DataFrame, pyspark.sql.DataFrame, pyspark.RDD, pandas DataFrame, or a list or iterable of arrays
schema (Any | None) – Schema like object or pyspark.sql.types.StructType, defaults to None
- Returns:
engine compatible dataframe
- Return type:
Note
if the input is already SparkDataFrame, it should return itself
for RDD, list or iterable of arrays, schema must be specified
when schema is not None, a potential type cast may happen to ensure the dataframe’s schema
all other methods in the engine can take arbitrary dataframes and call this method to convert before doing anything
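Example (an illustrative sketch converting toy inputs; not from the official docs):
import pandas as pd
from pyspark.sql import SparkSession
from fugue_spark.execution_engine import SparkExecutionEngine

engine = SparkExecutionEngine(SparkSession.builder.getOrCreate())

# a list of arrays requires an explicit schema
df1 = engine.to_df([[0, "a"], [1, "b"]], "x:long,y:str")
# a pandas DataFrame can be converted without a schema, or with one to force a cast
df2 = engine.to_df(pd.DataFrame({"x": [0, 1], "y": ["a", "b"]}))
print(df1.schema, df2.schema)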
- class fugue_spark.execution_engine.SparkMapEngine(execution_engine)[source]
Bases:
MapEngine
- Parameters:
execution_engine (ExecutionEngine)
- property is_distributed: bool
Whether this engine is a distributed engine
- property is_spark_connect: bool
Whether the spark session is created by spark connect
- map_dataframe(df, map_func, output_schema, partition_spec, on_init=None, map_func_format_hint=None)[source]
Apply a function to each partition after you partition the dataframe in a specified way.
- Parameters:
df (DataFrame) – input dataframe
map_func (Callable[[PartitionCursor, LocalDataFrame], LocalDataFrame]) – the function to apply on every logical partition
output_schema (Any) – Schema like object that can’t be None. Please also understand why we need this
partition_spec (PartitionSpec) – partition specification
on_init (Callable[[int, DataFrame], Any] | None) – callback function when the physical partition is initializing, defaults to None
map_func_format_hint (str | None) – the preferred data format for map_func, it can be pandas, pyarrow, etc., defaults to None. Certain engines can provide the most efficient map operations based on the hint.
- Returns:
the dataframe after the map operation
- Return type:
Note
Before implementing, you must read this to understand what map is used for and how it should work.
- class fugue_spark.execution_engine.SparkSQLEngine(execution_engine)[source]
Bases:
SQLEngine
Spark SQL execution implementation.
- Parameters:
execution_engine (ExecutionEngine) – it must be
SparkExecutionEngine
- Raises:
ValueError – if the engine is not
SparkExecutionEngine
- property dialect: str | None
- property execution_engine_constraint: Type[ExecutionEngine]
This defines the required ExecutionEngine type of this facet
- Returns:
a subtype of
ExecutionEngine
- property is_distributed: bool
Whether this engine is a distributed engine
- select(dfs, statement)[source]
Execute select statement on the sql engine.
- Parameters:
dfs (DataFrames) – a collection of dataframes that must have keys
statement (StructuredRawSQL) – the SELECT statement using the dfs keys as tables.
- Returns:
result of the SELECT statement
- Return type:
Examples
dfs = DataFrames(a=df1, b=df2)
sql_engine.select(
    dfs,
    [(False, "SELECT * FROM "),
     (True, "a"),
     (False, " UNION SELECT * FROM "),
     (True, "b")])
Note
There can be tables that are not in dfs. For example, you may want to select from hive without input DataFrames:
>>> sql_engine.select(DataFrames(), "SELECT * FROM hive.a.table")