Data Engineering/Spark

[Spark] How to print only specific columns of a pyspark dataframe

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import col

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 100),
    ('kim', 90),
    ('lee', 80),
    ('lee', 70),
    ('park', 60)
]

schema = StructType([
    StructField('name', StringType(), True),
    Str..

[Spark] How to perform operations on pyspark dataframe columns

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import col

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 100),
    ('kim', 90),
    ('lee', 80),
    ('lee', 70),
    ('park', 60)
]

schema = StructType([
    StructField('name', StringType(), True),
    Str..

[Spark] How to groupby a pyspark dataframe on the desired columns

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 'a', 100),
    ('kim', 'a', 90),
    ('lee', 'a', 80),
    ('lee', 'b', 70),
    ('park', 'b', 60)
]

schema = StructType([
    StructField('name', StringType(), True),
    StructField('cla..

[Spark] How to create a pyspark dataframe from a List

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ("kim", 100),
    ("kim", 90),
    ("lee", 80),
    ("lee", 70),
    ('park', 60)
]

schema = StructType([
    StructField('name', StringType(), True),
    StructField('score', IntegerType(), True)
]..

[Spark] How to create a Pyspark dataframe using the Row function

from pyspark.sql import SparkSession, Row

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [Row(id=0, name='park', score=100),
        Row(id=1, name='lee', score=90),
        Row(id=2, name='kim', score=80)]

df = spark.createDataFrame(data)
df.show()

[Spark] How to convert a pandas dataframe to a pyspark dataframe

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

df_pandas = pd.DataFrame({
    'id': [0, 1, 2, 3, 4],
    'name': ['kim', 'kim', 'park', 'park', 'lee'],
    'score': [100, 90, 80, 70, 60]
})

df_spark = spark.createDataFrame(df_pandas)
print(df_pandas)
df_spark.show()

[Spark] How to handle a List inside a Pyspark dataframe

from pyspark.sql.functions import explode

data = {
    'parent': [{
        'id': 'id_1',
        'category': 'category_1',
    }, {
        'id': 'id_2',
        'category': 'category_2',
    }]
}

df = spark.createDataFrame([data])
df.printSchema()
df.show(truncate=False)

df = df.select(explode(df.parent))
df.printSchema()
df.show(truncate=False)

root
 |-- parent: array (nullable = true)
 |    |-- element: map (containsNull = true)
 |    |    |-- key: string
 |    |    |-- value: string (valueCont..

[Spark] How to fix TypeError: Can not infer schema for type: <class 'str'>

data = {
    'parent': [{
        'id': 'id_1',
        'category': 'category_1',
    }, {
        'id': 'id_2',
        'category': 'category_2',
    }]
}

df = spark.createDataFrame(data)
df.printSchema()

Fail to execute line 49: df = spark.createDataFrame(data)
Traceback (most recent call last):
  File "/tmp/python16708257068745741506/zeppelin_python.py", line 162, in <module>
    exec(code, _zcUserQueryNameSpace)
  File "<string>", line 49, in <module>
  File "/usr/local..

[Spark] How to process a Pyspark json List

from pyspark.sql.types import MapType, StringType

data = [{
    'id': 'id_1',
    'category': 'category_1'
}, {
    'id': 'id_2',
    'category': 'category_2'
}]

schema = MapType(StringType(), StringType())
df = spark.createDataFrame(data, schema)
df.printSchema()
df.show(truncate=False)

df.withColumn('id', df.value.id) \
  .withColumn('category', df.value.category) \
  .drop('value') \
  .show()

[Spark] How to inspect Pyspark List+Json

from pyspark.sql.types import StructType, StructField, StringType

data = [{
    'id': 'id_1',
    'category': 'category_1'
}, {
    'id': 'id_2',
    'category': 'category_2'
}]

df = spark.createDataFrame(data)
df.printSchema()
df.show()

schema = StructType([
    StructField('id', StringType()),
    StructField('category', StringType())
])
df = spark.createDataFrame(data, schema)
df.printSchema()
df.show()

박경태
Post list for the 'Data Engineering/Spark' category (Page 2)