After some work, I've decided to update to Apache Spark 2.3.4 with the Hadoop
binaries, so there is no need to separately install and link py4j and other
modules to work with PySpark, for example.
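The point of shipping the distribution with its bundled binaries is that py4j comes included under the Spark install tree (upstream binary distributions place it as a zip in python/lib), so it does not need to be installed as a separate Python package. A minimal sketch of picking up the bundled copy; the /opt/apache-spark default is an assumption taken from the layout changes described later in this log, and the exact py4j zip name depends on the Spark release:

```python
import glob
import os
import sys

# Assumed install prefix; the actual path depends on how the package
# installs Spark (later commits in this log move it to /opt).
spark_home = os.environ.get("SPARK_HOME", "/opt/apache-spark")

# Upstream binary distributions ship py4j as a source zip under
# python/lib (e.g. py4j-*-src.zip), so no separate install is needed:
# prepend the bundled pyspark and py4j to sys.path.
py4j_zips = glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*-src.zip"))
sys.path[:0] = [os.path.join(spark_home, "python")] + py4j_zips
```

With SPARK_HOME pointing at a real install, `import pyspark` should then resolve against the bundled copies instead of anything installed via pip.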
- improved symlink management
- hadoop is no longer needed
- java 9 is explicitly marked as incompatible
Inspired by https://github.com/huitseeker/apache-spark/pull/1/files
Squashed commit of the following:

commit 1700b03fcd6b95198564e07345c482dbc85c7c92
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Sun Jul 3 08:39:50 2016 -0400

    add support for slave service

commit 70aecf80a40341836dfa3b1cae222b1db74801f3
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Sun Jul 3 08:12:07 2016 -0400

    fix usage

commit f810e7d98b67732af1814a36a65843f1a9b0fae9
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Sun Jul 3 08:05:57 2016 -0400

    add service for master

commit 0f597e790c48b3487f47508d1eac785b45248bb0
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Sun Jul 3 04:32:16 2016 -0400

    add scripts to run spark in foreground

commit 4eace18e92513bfa5f3fc8fe91a020f55fd67800
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Fri Jul 1 05:32:16 2016 -0400

    add rsync as an optional dependency

commit a5ea131071667416b14cf95ffbe18bef4e9c968f
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Wed Jun 29 22:44:05 2016 -0400

    add hadoop as a dependency

commit bde492e3354cb2164855f79c49c42037c14d42a4
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Wed Jun 29 22:22:26 2016 -0400

    change log files to /var/log/apache-spark
    fix sparkenv.sh to load hadoop classpaths

commit 4625ceb7b05832217a4e1a1c46fe16f645d30fc4
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Wed Jun 29 07:20:35 2016 -0400

    fix systemd service file

commit 2354f598de49424942804051f62342b42b3eb1f3
Author: Xiang Gao <qasdfgtyuiop@gmail.com>
Date:   Wed Jun 29 07:04:03 2016 -0400

    Make a lot of changes:
    * upgrade to 1.6.2
    * use the pre-built binaries "spark--bin-without-hadoop.tgz" instead of compiling from source
    * remove the dependency on scala and maven, which are already bundled in spark
    * move python2 and hadoop from depends to optdepends
    * add r as an optional dependency
    * move the templates for conf to conf-templates
    * move the whole conf directory to /etc/apache-spark
    * move the apache-spark directory to /opt
    * move the work directory to /var/lib/apache-spark