Data Engineering

Data Engineering/Spark

[Spark] ValueError: field score: This field is not nullable, but got None

The following error occurred:

Traceback (most recent call last):
  File "df_schema_null.py", line 23, in <module>
    df = spark.createDataFrame(data = data, schema = schema)
  File "/Users/pgt0409/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pyspark/sql/session.py", line 894, in createDataFrame
    return self._create_dataframe(
  File "/Users/pgt0409/opt/anaconda3/envs/py38/lib/python3.8/site-packages/pyspark/sql/session.py"..

Data Engineering/Spark

[Spark] How to set schema data types when creating a pyspark dataframe

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 100),
    ('kim', 90),
    ('lee', 80),
    ('lee', 70),
    ('park', 60)
]

schema = ['name', 'score']

df = spark.createDataFrame(data = data, schema = schema)
df.printSchema()
df.show..

Data Engineering/Spark

[Spark] How to turn a specific column of a pyspark dataframe into a list

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 100),
    ('kim', 90),
    ('lee', 80),
    ('lee', 70),
    ('park', 60)
]

schema = StructType([ \
    StructField('name', StringType(), True), \
    StructField('score', IntegerType(), True)..

Data Engineering/Spark

[Spark] The best and fastest way to convert a pyspark dataframe to a list

+-------------------------------------------------------------+---------+-------------+
| Code                                                        | 100,000 | 100,000,000 |
+-------------------------------------------------------------+---------+-------------+
| df.select("col_name").rdd.flatMap(lambda x: x).collect()    |     0.4 |        55.3 |
| list(df.select('col_name').toPandas()['col_name'])          |     0.4 |        17.5 |
| df.select('col_name').rdd.map(lambda row : ro..             |         |             |
+-------------------------------------------------------------+---------+-------------+

Data Engineering/Spark

[Spark] How to print only specific columns of a pyspark dataframe

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import col

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 100),
    ('kim', 90),
    ('lee', 80),
    ('lee', 70),
    ('park', 60)
]

schema = StructType([ \
    StructField('name', StringType(), True), \
    Str..

Data Engineering/Spark

[Spark] How to do arithmetic with pyspark dataframe columns

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql.functions import col

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 100),
    ('kim', 90),
    ('lee', 80),
    ('lee', 70),
    ('park', 60)
]

schema = StructType([ \
    StructField('name', StringType(), True), \
    Str..

Data Engineering/Spark

[Spark] How to group a pyspark dataframe by a chosen column

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ('kim', 'a', 100),
    ('kim', 'a', 90),
    ('lee', 'a', 80),
    ('lee', 'b', 70),
    ('park', 'b', 60)
]

schema = StructType([ \
    StructField('name', StringType(), True), \
    StructField('cla..

Data Engineering/Spark

[Spark] How to create a pyspark dataframe from a List

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [
    ("kim", 100),
    ("kim", 90),
    ("lee", 80),
    ("lee", 70),
    ('park', 60)
]

schema = StructType([ \
    StructField('name', StringType(),True), \
    StructField('score', IntegerType(),True) ]..

Data Engineering/Spark

[Spark] How to create a pyspark dataframe with the Row function

from pyspark.sql import SparkSession, Row

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

data = [Row(id = 0, name = 'park', score = 100),
        Row(id = 1, name = 'lee', score = 90),
        Row(id = 2, name = 'kim', score = 80)]

df = spark.createDataFrame(data)
df.show()

Data Engineering/Spark

[Spark] How to convert a pandas dataframe to a pyspark dataframe

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .master('local') \
    .appName('my_pyspark_app') \
    .getOrCreate()

df_pandas = pd.DataFrame({
    'id': [0, 1, 2, 3, 4],
    'name': ['kim', 'kim', 'park', 'park', 'lee'],
    'score': [100, 90, 80, 70, 60]
})

df_spark = spark.createDataFrame(df_pandas)
print(df_pandas)
df_spark.show()

박경태
List of posts in the 'Data Engineering' category (Page 7)