Data Engineering/Spark

[Spark] pyspark RDD upper(), lower()

Code
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName('0_test_rdd') \
    .master('spark://spark-master-1:7077,spark-master-2:7077') \
    .config('spark.driver.cores', '2') \
    .config('spark.driver.memory', '2g') \
    .config('spark.executor.memory', '2g') \
    .config('spark.executor.cores', '2') \
    .config('spark.cores.max', '8') \
    .getOrCreate()
sc = spark.sparkContext
line_1 = 'i..
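
The excerpt is cut off before the RDD work begins. A minimal sketch of what an upper()/lower() demo typically looks like, reusing the sc created above; the sample strings are my own:

# Continuing from the `sc` built above; the strings are illustrative
# stand-ins for the truncated line_1.
line_1 = 'i like apple'
rdd = sc.parallelize([line_1, 'I Like BANANA'])
print(rdd.map(lambda s: s.upper()).collect())   # ['I LIKE APPLE', 'I LIKE BANANA']
print(rdd.map(lambda s: s.lower()).collect())   # ['i like apple', 'i like banana']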

Data Engineering/Spark

[Spark] Achieving high availability with a Spark standalone cluster and a ZooKeeper cluster

docker-compose.yml
version: '2.1'
services:
  zookeeper-1:
    hostname: zookeeper-1
    container_name: zookeeper-1
    image: zookeeper:3.6
    restart: always
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
    volumes:
      - type: bind
        source: ./zk-cluster/zookeeper-1/data
        target: /data
        read_only: fal..
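
The compose file covers the ZooKeeper side only. A minimal sketch of how a PySpark driver targets the resulting HA master pair, assuming each master was started with ZooKeeper recovery enabled:

# Assumes both masters run with (via SPARK_DAEMON_JAVA_OPTS or spark-defaults):
#   spark.deploy.recoveryMode=ZOOKEEPER
#   spark.deploy.zookeeper.url=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
from pyspark.sql import SparkSession

# Listing both master URLs lets the client fail over to whichever
# master ZooKeeper has elected leader.
spark = SparkSession.builder \
    .appName('ha_check') \
    .master('spark://spark-master-1:7077,spark-master-2:7077') \
    .getOrCreate()
print(spark.sparkContext.master)
spark.stop()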

Data Engineering/Spark

[Zookeeper] How to build a ZooKeeper cluster with Docker

version: '2.1'
services:
  zookeeper-1:
    hostname: zookeeper-1
    container_name: zookeeper-1
    image: zookeeper:3.6
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
    volumes:
      - type: bind
        source: ./zk-cluster/zookeeper-1/data
        target: /data
        read_only: false
  zookeeper-2:
    hostname: zookeeper..
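
Once the three containers are up, a quick connectivity check can be done from Python. This sketch assumes the kazoo client (pip install kazoo) and that the compose hostnames resolve from wherever it runs:

from kazoo.client import KazooClient

# Connect to the whole ensemble; kazoo picks a live member.
zk = KazooClient(hosts='zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181')
zk.start()                    # blocks until a session is established
print(zk.get_children('/'))   # a healthy ensemble returns at least ['zookeeper']
zk.stop()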

Data Engineering/Spark

[Spark] How to set memory options in a Spark session

Code
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName('3_test_sparksession') \
    .master('spark://spark-master:17077') \
    .config('spark.driver.cores', '1') \
    .config('spark.driver.memory', '1g') \
    .config('spark.executor.memory', '1g') \
    .config('spark.executor.cores', '2') \
    .config('spark.cores.max', '2') \
    .getOrCreate()
sc = spark.sparkContext
for setting in sc._conf..
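
To confirm the requested values actually took effect, the settings can be read back from the context; a small check continuing from the sc created above:

# Read back the memory/core settings requested in the builder above.
for key in ('spark.driver.cores', 'spark.driver.memory',
            'spark.executor.memory', 'spark.executor.cores',
            'spark.cores.max'):
    print(key, '=', sc._conf.get(key))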

Data Engineering/Spark

[Spark] How to use the Global Temporary View

Code
from pyspark.sql import SparkSession
from pyspark.sql import Row

spark = SparkSession.builder \
    .appName("1_test_dataframe") \
    .master('spark://spark-master:17077') \
    .getOrCreate()
sc = spark.sparkContext
data = [Row(id = 0, name = 'a', age = 12, type = 'A', score = 90, year = 2012),
        Row(id = 1, name = 'a', age = 15, type = 'B', score = 80, year = 2013),
        Row(id = 2, name = 'b', age = 15, t..
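
The excerpt stops inside the sample data. A minimal sketch of the global-temp-view steps that typically follow, assuming a data list of Rows like the one above:

df = spark.createDataFrame(data)
df.createGlobalTempView('people')

# Global temporary views live in the reserved global_temp database ...
spark.sql('SELECT name, score FROM global_temp.people WHERE year >= 2013').show()

# ... and stay visible to other sessions of the same application.
spark.newSession().sql('SELECT COUNT(*) AS cnt FROM global_temp.people').show()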

Data Engineering/Spark

[Spark] Basic usage of Spark DataFrames with SQL queries

Code
from pyspark.sql import SparkSession
from pyspark.sql import Row

spark = SparkSession.builder \
    .appName("1_test_dataframe") \
    .master('spark://spark-master:17077') \
    .getOrCreate()
sc = spark.sparkContext
data = [Row(id = 0, name = 'a', age = 12, type = 'A', score = 90, year = 2012),
        Row(id = 1, name = 'a', age = 15, type = 'B', score = 80, year = 2013),
        Row(id = 2, name = 'b', age = 15, t..
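
A sketch of the usual continuation: register the DataFrame as a session-scoped temp view and query it with SQL, assuming the data list above:

df = spark.createDataFrame(data)
df.createOrReplaceTempView('scores')

# Plain temp view: visible only to this SparkSession.
spark.sql('SELECT type, AVG(score) AS avg_score '
          'FROM scores GROUP BY type ORDER BY type').show()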

Data Engineering/Spark

[Spark] Comparing user settings in PySpark code

Code
from pyspark.conf import SparkConf
from pyspark.context import SparkContext

conf = SparkConf().setAll([('spark.app.name', '2_test_sparkconf'),
                           ('spark.master', 'spark://spark-master:17077')])
sc = SparkContext(conf = conf)
print('first')
for setting in sc._conf.getAll():
    print(setting)
sc.stop()

conf = SparkConf().setAll([('spark.app.name', '2_test_sparkconf'),
                           ('spark.master', 'spark://spa..
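
The excerpt truncates the second configuration. One way to sketch the comparison it appears to make is to capture each context's effective settings and diff them; the snapshot helper and the second conf's extra key are my own:

from pyspark.conf import SparkConf
from pyspark.context import SparkContext

def snapshot(pairs):
    """Start a context, capture its effective settings, stop it."""
    sc = SparkContext(conf=SparkConf().setAll(pairs))
    settings = dict(sc._conf.getAll())
    sc.stop()
    return settings

base  = snapshot([('spark.app.name', '2_test_sparkconf'),
                  ('spark.master', 'spark://spark-master:17077')])
tuned = snapshot([('spark.app.name', '2_test_sparkconf'),
                  ('spark.master', 'spark://spark-master:17077'),
                  ('spark.executor.memory', '1g')])   # hypothetical extra key

# Print only the keys whose effective values differ.
for key in sorted(set(base) | set(tuned)):
    if base.get(key) != tuned.get(key):
        print(key, ':', base.get(key), '->', tuned.get(key))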

Data Engineering/Spark

[Spark] How to print the default settings of a Spark session

Code
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName('3_test_sparksession') \
    .master('spark://spark-master:17077') \
    .getOrCreate()
sc = spark.sparkContext
for setting in sc._conf.getAll():
    print(setting)
sc.stop()

Result
('spark.driver.port', '39007')
('spark.master', 'spark://spark-master:17077')
('spark.sql.warehouse.dir', 'file:/home/spark/dev/spark-warehouse')
..
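
sc._conf is technically a private attribute; the same listing is available through public APIs. A sketch, assuming it runs before the sc.stop() above:

# Public alternatives to the private sc._conf used above:
for setting in sorted(sc.getConf().getAll()):   # SparkContext.getConf()
    print(setting)
# Runtime SQL configuration is read through spark.conf instead:
print(spark.conf.get('spark.sql.warehouse.dir'))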

Data Engineering/Spark

[Spark] How to print Spark settings from PySpark code

Code
from pyspark.conf import SparkConf
from pyspark.context import SparkContext

conf = SparkConf().setAll([('spark.app.name', '2_test_sparksession'),
                           ('spark.master', 'spark://spark-master:17077'),
                           ('spark.driver.cores', '1'),
                           ('spark.driver.memory', '1g'),
                           ('spark.executor.memory', '1g'),
                           ('spark.executor.cores', '2'),
                           ('spark.cores.max', '2')])
sc = SparkContext(conf = conf)
for setting in sc...
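
A SparkConf can also be inspected before any context starts; a small sketch using toDebugString(), rebuilding a shortened version of the conf above:

from pyspark.conf import SparkConf

conf = SparkConf().setAll([('spark.app.name', '2_test_sparksession'),
                           ('spark.master', 'spark://spark-master:17077'),
                           ('spark.driver.memory', '1g')])
# toDebugString() lists the explicitly set keys, one per line,
# without needing a running SparkContext.
print(conf.toDebugString())
print(conf.get('spark.driver.memory'))   # read back a single key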

박경태
List of posts in the 'Data Engineering/Spark' category (Page 4)