X-Git-Url: https://git.librecmc.org/?a=blobdiff_plain;f=README.md;h=b06d85b3baea2136745765bfc01b5fa82b391e29;hb=1e6e409b355aac1b9030bf797ee0f43bc37a6ac7;hp=202bc998e32da24c1fee8195d78d24debed8e68f;hpb=d855fc23064657575b86a0d9166e2ca2cfb63ead;p=oweals%2Fkarmaworld.git

diff --git a/README.md b/README.md
index 202bc99..b06d85b 100644
--- a/README.md
+++ b/README.md
@@ -42,447 +42,270 @@ directory underneath that (`{project_root}/karmaworld`) alongside files like
 
 Notice: This software makes use of external third party services which require
 accounts to access the service APIs. Without these third parties available,
-this software may require considerable overhaul.
-
-### Filepicker
-This software uses [Filepicker.io](https://www.inkfilepicker.com/) for uploading
-files. This requires an account with Filepicker.
-
-Filepicker requires an additional third party file hosting site where it may
-send uploaded files. For this project, we have used Amazon S3.
-
-Filepicker will provide an API key. This is needed by the software.
+this software may require considerable overhaul. These services have
+API keys, credentials, and other information that you must provide to KarmaNotes
+as environment variables. The best way to persist these environment variables is
+by using a `.env` file. Copy `.env.example` to `.env` and populate the fields as
+required.
+
+### Heroku
+This project has chosen to use [Heroku](https://www.heroku.com) to host the Django
+and Celery software. While not a hard requirement, the more up-to-date parts of
+this documentation assume Heroku is in use.
+
+See README.heroku for more information.
+
+### Celery Queue
+Celery uses the Advanced Message Queuing Protocol (AMQP) for passing messages to
+its workers. We recommend using Heroku's CloudAMQP add-on, getting your own
+CloudAMQP account, or running a queueing system on your own. The `CLOUDAMQP_URL`
+environment variable must be set correctly for KarmaNotes to be able to use
+Celery. The `CELERY_QUEUE_NAME` environment variable must be set to the name of
+the queue you wish to use. Setting this to something unique allows multiple
+instances of KarmaNotes (or some other software) to share the same queueing
+server.
 
 ### Amazon S3
+The instructions for creating an [S3](http://aws.amazon.com/s3/) bucket may be
+[found on Amazon](http://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html).
 
-#### for Filepicker
-This software uses [Amazon S3](http://aws.amazon.com/s3/) as a third party file
-hosting site. The primary use case is a destination for Filepicker files. The
-software won't directly need any S3 information for this use case; it will be
-provided directly to Filepicker.
-
-#### for Static File hosting
-A secondary use case for S3 is hosting static files. The software will need to
-update static files on the S3 bucket. In this case, the software will need the
-S3 bucket name, access key, and secret key.
-
-The code assumes S3 is used for static files in a production environment. To
-obviate the need for hosting static files through S3 (noting that it still might
-be necessary for Filepicker), a workaround was explained
-[in this Github ticket](https://github.com/FinalsClub/karmaworld/issues/192#issuecomment-30193617).
-
-That workaround is repeated here. Make the following changes to
-`{project_root}/karmaworld/settings/prod.py`:
-
-1. comment out everything about static_s3 from imports
-2. comment out storages from the `INSTALLED_APPS`
-3. change `STATIC_URL` to `'/assets/'`
-4. comment out the entire storages section (save for part of `INSTALLED_APPS`
-   and `STATIC_URL`)
-5. add this to the nginx config:
-
-    location /assets/ {
-        root /var/www/karmaworld/karmaworld/;
-    }
-
-### IndexDen
-KarmaNotes uses IndexDen to create a searchable index of all the notes
-in the system. Create a free IndexDen account at [their homepage](http://indexden.com/).
-You will be given a private URL that accesses your IndexDen account.
-Create a file at karmaworld/secret/indexden.py, and enter your private URL and the name
-of the index you want KarmaNotes to use. The index will be created automatically when
-KarmaNotes is run if it doesn't already exist. For example,
-```
-PRIVATE_URL = 'http://:secretsecret@secret.api.indexden.com'
-INDEX = 'karmanotes_something_something'
-```
-
-### Google Drive
-This software uses [Google Drive](https://developers.google.com/drive/) to
-convert documents to and from various file formats.
+Two separate buckets will be needed in production: one for static file hosting
+and one as a communication bus with Filepicker.
 
-A Google Drive service account with access to the Google Drive is required.
-This may be done with a Google Apps account with administrative privileges, or ask
-your business sysadmin.
+This software uses S3 to store files which are sent to or received
+from Filepicker. Filepicker will need to know the S3 bucket name, access key,
+and secret key.
 
-These are the instructions to create a Google Drive service account:
-https://developers.google.com/drive/delegation
+Filepicker users can only make use of an S3 bucket with a paid account. For
+development purposes, no Filepicker S3 bucket is needed. Skip all references to
+the Filepicker S3 bucket in the development case.
 
-When completed, you'll have a file called `client_secrets.json` and a p12 file
-which is the key to access the service account. Both are needed by the software.
+The software will not need to know the S3 credentials for the Filepicker
+bucket, because the software will upload files to the Filepicker S3 bucket
+through Filepicker's API and it will link to or download files from the
+Filepicker S3 bucket through Filepicker's URLs. This will be covered in the
+Filepicker section below.
 
-### Twitter
+This software uses S3 for hosting static files. The software will need to
+update static files on the S3 bucket. As such, the software will need the
+S3 bucket name, access key, and secret key via the environment variables. This
+is described in subsections below.
 
-Twitter is used to post updates about new courses. Access to the Twitter API
-will be required for this task.
+To support static hosting, `DEFAULT_FILE_STORAGE` should be set to
+`'storages.backends.s3boto.S3BotoStorage'`, unless there is a compelling reason
+to change it.
 
-If this Twitter feature is desired, the consumer key and secret as well as the
-access token key and secret are needed by the software.
+There are three ways to set up access to the S3 buckets, depending upon speed
+and security. The more secure the setup, the longer it takes.
 
-If the required files are not found, then no errors will occur.
+#### insecure S3 access
+For quick and dirty insecure S3 access, create a single group and a single user
+with full access to all buckets. Full access to all buckets is insecure!
 
-To set this up, create a new Twitter application at https://dev.twitter.com/apps/new.
-Make sure this application has read/write access. Generate an access token.
Go to your -OAuth settings, and grab the "Consumer key", "Consumer secret", "Access token", and -"Access token secret". +Create an +[Amazon IAM group](http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreatingAndListingGroups.html) +with full access to the S3 bucket. Select the "Amazon S3 Full Accesss" Policy +Template. -Create a file at karmaworld/secret/twitter.py, and enter these tokens. For example, -``` -CONSUMER_KEY = '???' -CONSUMER_SECRET = '???' -ACCESS_TOKEN_KEY = '???' -ACCESS_TOKEN_SECRET = '???' -``` +Create an +[Amazon IAM user](http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html). +Copy the credentials into the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` +environment variables. Be sure to write down the access information, as it +will only be shown once. -### SSL Certificate +#### secure S3 access +For secure S3 access, two users will be needed. One with access to the +Filepicker bucket and one with access to the static hosting bucket. -If you wish to host your system publicly, you'll need an SSL certificate -signed by a proper authority. +Note: this might need to be modified to prevent creation and deletion of +buckets? -If you are working on local system for development, a self signed certificate -will suffice. There are plenty of resources available for learning how to -create one, so that will not be detailed here. Note that the Vagrant file will -automatically generated a self signed certificate within the virtual machine. +Create an +[Amazon IAM group](http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_CreatingAndListingGroups.html) +with full access to the S3 bucket. The quick way is to select the +"Amazon S3 Full Accesss" Policy Template and replace `"Resource": "*"` with +`"Resource": "arn:aws:s3:::"`. -The certificate should be installed using nginx. +Create an +[Amazon IAM user](http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_SettingUpUser.html). +Copy the credentials into the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` +environment variables. Be sure to write down the access information, as it +will only be shown once. -# Development Install +Ensure the created user is a member of the group with access to the S3 +static files bucket. -If you need to setup the project for development, it is highly recommend that -you grab create a development virtual machine or (if available) grab one that -has already been created for your site. +Repeat the process again, creating a group for the Filepicker bucket and +creating a user with access to that group. These credentials will be passed +on to Filepicker. -The *host machine* is the system which runs e.g. VirtualBox, while the -*virtual machine* refers to the system running inside e.g. VirtualBox. +#### somewhat secure S3 access +Create two groups as described in the `secure S3 access` section above. -## Creating a Virtual Machine by hand +Create a single user, save the credentials as described in the +`insecure S3 access` section above, and pass the credentials on to Filepicker. -Create a virtual machine with your favorite VM software. Configure the virtual -machine for production with the steps shown in the [Production Install](#production-install) section. +Add the single user to both groups. -## Creating a Virtual Machine with Vagrant +This is less secure because if your web server or Filepicker get compromised +(so there are two points for potential failure), the single compromised +user has full access to both buckets. 
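+
+Whichever approach you choose, it is worth confirming that the static hosting
+credentials actually work before deploying. The following is a minimal sketch
+using the `boto` library (the library behind the `S3BotoStorage` backend
+mentioned above); the bucket name is a placeholder for your static files bucket:
+
+    import os
+    import boto
+
+    # Credentials come from the same environment variables KarmaNotes reads.
+    conn = boto.connect_s3(os.environ['AWS_ACCESS_KEY_ID'],
+                           os.environ['AWS_SECRET_ACCESS_KEY'])
+
+    # Placeholder name; substitute the static hosting bucket created above.
+    bucket = conn.get_bucket('my-karmanotes-static-bucket')
+
+    # Write and delete a throwaway key to prove the IAM user can manage
+    # objects in this bucket.
+    key = bucket.new_key('iam-check.txt')
+    key.set_contents_from_string('ok')
+    key.delete()
+
+If the group policy was scoped to a single bucket, repeating this check against
+any other bucket should fail with an access denied error.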
-Vagrant supports a variety of virtual machine software and there is additional -support for Vagrant to deploy to a wider variety. However, for these -instructions, it is assumed Vagrant will be deployed to VirtualBox. +### Amazon Cloudfront CDN +[Cloudfront CDN](http://aws.amazon.com/cloudfront/) assists static file hosting. -1. Configure external dependencies on the host machine: - * Under `{project_root}/karmaworld/secret/`: - 1. Copy files with the example extension to the corresponding filename - without the example extension (e.g. - `cp filepicker.py.example filepicker.py`) - 1. Modify those files, but ignore `db_settings.py` (Vagrant takes care of that one) - 1. Copy the Google Drive service account p12 file to `drive.p12` - (this filename and location may be changed in `drive.py`) - 1. Ensure `*.py` in `secret/` are never added to the git repo. - (.gitignore should help warn against taking this action) +Follow +[Amazon's instructions](http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html) +to host static files out of the appropriate S3 bucket. Note that Django's static +file upload process has been modified to mark static files as publicly +assessible. -1. Install [VirtualBox](http://www.virtualbox.org/) +In the settings for the Cloudfront Distribution, copy the "Domain Name" from +General settings and set `CLOUDFRONT_DOMAIN` to it. For example, `abcdefghij.cloudfront.net`. -1. Install [vagrant](http://www.vagrantup.com/) 1.3 or higher +### Amazon Mechanical Turk +Mechanical turk is employed to generate human feedback from uploaded notes. +This service is helpful for generating flash cards and quizzes. -1. Use Vagrant to create the virtual machine. - * While in `cd {project_root}`, type `vagrant up` +This service is optional and it might cause unexpected charges when +deployed. If the required environment variable is not found, +then no errors will occur and no mechanical turk tasks will be created, avoiding any unexpected +costs. -1. Connect to the virtual machine with `vagrant ssh` +The `MTURK_HOST` environment variable is almost certainly +`"mechanicalturk.amazonaws.com"`. -Note: -Port 443 of the virtual machine will be configured as port 6659 on the host -system. While on the host system, fire up your favorite browser and point it at -`https://localhost:6659/`. This connects to your host system on port 6659, which -forwards to your virtual machine's web site using SSL. +The code will create and publish HITs on your behalf. -Port 80 of the virtual machine will be configured as port 16659 on the host -system. While on the host system, fire up your favorite browser and point it at -`http://localhost:16659/`. This connects to your host system on port 16659, -which forwards to your virtual machine's web site using plain text. +### Google Drive +This software uses [Google Drive](https://developers.google.com/drive/) to +convert documents to and from various file formats. -## Completing the Virtual Machine with Fabric - -*Notice* Fabric might not run properly if you presently in a virtualenv. -`deactivate` prior to running fab commands. - -### From the Host Machine - -If Fabric is available on the host machine, you should be able to run Fabric -commands directly on the host machine, pointed at the virtual machine. If -Fabric is not available on the Host Machine, see the next section. - -To setup the host machine properly, see the section about -[accessing the VM via fabric](#accessing-the-vm-via-fabric) and then return to -this section. 
- -Assuming those steps were followed with the alias, the following instructions -should complete the virtual machine setup: - -1. `cd {project_root}` on the host machine. - -1. type `vmfab first_deploy`. - -### From within the Virtual Machine - -If Fabric is not available on the host machine, or just for funsies, you may -run the Fabric commands within the virtual machine. - -1. Connect to the virtual machine with `vagrant ssh`. - -1. On the virtual machine, type `cd karmanotes` to get into the code - repository. - -1. In the code repo of the VM, type `fab -H 127.0.0.1 first_deploy` - - During this process, you will be queried to create a Django site admin. - Provide information. You will be asked to remove duplicate schools. Respond - with yes. - -# Production Install - -These steps are taken care of by automatic utilities. Vagrant performs the -first subsection of these instructions and Fabric performs the second -subsection. These instructions are detailed here for good measure, but should -not generally be needed. - -1. Ensure the following are installed: - * `git` - * `7zip` (for unzipping US Department of Education files) - * `PostgreSQL` (server and client) - * `nginx` - * `libxslt` and `libxml2` (used by some Python libraries) - * `RabbitMQ` (server) - * `memcached` - * `Python` - * `PIP` - * `virtualenv` - * `virtualenvwrapper` (might not be needed anymore) - * `pdf2htmlEX` - - On a Debian system supporting Apt, this can be done with: -``` - sudo apt-get install python-pip postgresql python-virtualenv nginx \ - virtualenvwrapper git libxml2-dev p7zip-full libffi-dev \ - postgresql-server-dev-9.1 libxslt1-dev \ - libmemcached-dev python-dev rabbitmq-server \ - cmake libpng-dev libjpeg-dev libgtk2.0-dev \ - pkg-config libfontconfig1-dev autoconf libtool - - wget http://poppler.freedesktop.org/poppler-0.24.4.tar.xz - tar xf poppler-0.24.4.tar.xz - cd poppler-0.24.4 - ./configure --prefix=/usr --enable-xpdf-headers - make - sudo make install - cd ~/ - - git clone https://github.com/fontforge/fontforge.git - cd fontforge - ./bootstrap - ./configure --prefix=/usr - make - sudo make install - cd ~/ - - git clone https://github.com/charlesconnell/pdf2htmlEX.git - cd pdf2htmlEX - cmake . - make - sudo make install -``` - -1. Generate a PostgreSQL database and a role with read/write permissions. - * For Debian, these instructions are helpful: https://wiki.debian.org/PostgreSql - -1. Modify configuration files. - * There are settings in `{project_root}/karmaworld/settings/prod.py` - * Most of the setting should work fine by default. - * There are additional configuration options for external dependencies - under `{project_root}/karmaworld/secret/`. - 1. Copy files with the example extension to the corresponding filename - without the example extension (e.g. - `cp filepicker.py.example filepicker.py`) - 1. Modify those files. - * Ensure `PROD_DB_USERNAME`, `PROD_DB_PASSWORD`, and `PROD_DB_NAME` - inside `db_settings.py` match the role, password, and database - generated in the previous step. - 1. Copy the Google Drive service account p12 file to `drive.p12` - (this filename and location may be changed in `drive.py`) - 1. Ensure `*.py` in `secret/` are never added to the git repo. - (.gitignore should help warn against taking this action) - -1. Make sure that /var/www exists, is owned by the www-data group, and that - the desired user is a member of the www-data group. - -1. 
Configure nginx with a `proxy_pass` to port 8000 (or whatever port gunicorn - will be running the site on) and any virtual hosting that is desired. - Here is an example server file to put into `/etc/nginx/sites-available/` - - server { - listen 80; - server_name localhost; - return 301 https://$host$request_uri; - } - - server { - listen 443; - ssl on; - server_name localhost; - client_max_body_size 20M; - - location / { - # pass traffic through to gunicorn - proxy_pass http://127.0.0.1:8000; - # pass HTTP(S) status through to Django - proxy_set_header X-Forwarded-SSL $https; - proxy_set_header X-Forwarded-Protocol $scheme; - proxy_set_header X-Forwarded-Proto $scheme; - # pass nginx site back to Django - proxy_set_header Host $http_host; - } - } - -1. Configure the system to start supervisor on boot. An init script for - supervisor is in the repo at `{project_root}/karmaworld/confs/supervisor`. - `update-rc.d supervisor defaults` is the Debian command to load the init - script into the correct directories. - -1. Make sure `{project_root)/var/log` and `{project_root}/var/run` exist and - may be written to, or else put the desired logging and run file paths into - `{project_root}/confs/prod/supervisord.conf` - -1. Create a virtualenv under `/var/www/karmaworld/venv` - -1. Change into the virtualenv with `. /var/www/karmaworld/venv/bin/activate`. - Within the virtualenv: - - 1. Update the Python depenencies with `pip -i {project_root}/reqs/prod.txt` - * If you want debugging on a production-like system: - 1. run `pip -i {project_root}/reqs/vmdev.txt` - 1. change `{project_root}/manage.py` to point at `vmdev.py` - instead of `prod.py` - 1. ensure firefox is installed on the system (such as by - `sudo apt-get install firefox`) - - 1. Setup the database with `python {project_root}/manage.py syncdb --migrate` - - 1. Collect static resources and put them in the static hosting location with - `python {project_root}/manage.py collect_static` - -1. The database needs to be populated with schools. A list of accredited schools - may be found on the US Department of Education website: - http://ope.ed.gov/accreditation/GetDownloadFile.aspx - - Alternatively, use the built-in scripts while in the virtualenv: +A Google Drive service account with access to the Google Drive is required. +This may be done with a Google Apps account with administrative privileges, or ask +your business sysadmin. - 1. Fetch USDE schools with - `python {project_root}/manage.py fetch_usde_csv ./schools.csv` +Follow [Google's instructions](https://developers.google.com/drive/delegation) +to create a Google Drive service account. - 1. Upload the schools into the database with - `python {project_root}/manage.py import_usde _csv ./schools.csv` +Convert the p12 file into a Base64 encoded string for the +`GOOGLE_SERVICE_KEY_BASE64` environment variable. There are many ways to do +this. If Python is available, the +[binascii library](https://docs.python.org/2/library/binascii.html#binascii.b2a_base64) +makes this very easy: - 1. Clean up redundant information with - `python {project_root}/manage.py sanitize_usde_schools` + import binascii + with open('file.p12', 'r') as f: + print binascii.b2a_base64(f.read) -1. Startup `supervisor`, which will run `celery` and `gunicorn`. This may be - done from within the virtualenv by typing - `python {project_root}/manage.py start_supervisord` +Copy the contents of `client_secret_*.apps.googleusercontent.com.json` into the +`GOOGLE_CLIENT_SECRETS` environment variable. -1. 
If everything went well, gunicorn should be running the website on port 8000 - and nginx should be serving gunicorn on port 80. +### Filepicker +This software uses [Filepicker.io](https://www.inkfilepicker.com/) for uploading +files. This requires an account with Filepicker. -# Update a deployed system +Filepicker can use an additional third party file hosting site where it may +send uploaded files. This project, in production, uses Amazon S3 as the third +party. See the Amazon S3 section above for more information. -Once code has been updated, the running web service will need to be updated -to stay in sync with the code. +Create a new App with Web SDK and provide the Heroku App URL for the +Application's URL. You'll be given an API Key for the App. Paste this into the +`FILEPICKER_API_KEY` environment variable. -## Fabric +Find the 'App Security' button on the left hand side of the web site. Make sure +'Use Security' is enabled. Generate a new secret key. Paste this key into the +`FILEPICKER_SECRET` environment variable. -Run the `deploy` fab command. For example: -`fab -H 127.0.0.1 deploy` +If you have an upgraded plan, you can configure Filepicker to have access to +your Filepicker S3 bucket. Click 'Amazon S3' on the left hand side menu and +supply the credentials for the user with access to the Filepicker S3 bucket. -## By Hand +### IndexDen +KarmaNotes uses IndexDen to create a searchable index of all the notes in the +system. Create an free IndexDen account at +[their homepage](http://indexden.com/). You will be given a private URL that +accesses your IndexDen account. This URL is visible on your dashboard (you +might need to scroll down). -1. pull code in from the repo with `git pull` -1. If any Python requirements have changed, install/upgrade them: - `pip install -r --upgrade reqs/prod.txt` -1. If the database has changed, update the database with: - `python manage.py syncdb --migrate` -1. If any static files have changed, synchornize them with; - `python manage.py collectstatic` -1. Django will probably need a restart. - * For a dev system, ctrl-c the running process and restart it. - * For a production system, there are two options. - * `python manage.py restart_supervisord` if far reaching changes - have been made (that might effect celery, beat, etc) - * `python manage.py restart_gunicorn` if only minor Django changes - have been made - * If you are uncertain, best bet is to restart supervisord. +Set the `INDEXDEN_PRIVATE_URL` environment variable to your private URL. -# Accessing the Vagrant Virtual Machine +Set the `INDEXDEN_INDEX` environment variable to the name of the index you want +to use for KarmaNotes. The index will be created automatically when KarmaNotes +is run if it doesn't already exist. It may be created through the GUI if +desired. -## Accessing the VM via Fabric -If you have Fabric on the host machine, you can configure your host machine -to run Fabric against the virtual machine. +### Twitter -You will need to setup the host machine with the proper SSH credentials to -access the virtual machine. This is done by running `vagrant ssh-config` from -`{project_root}` and copying the results into your SSH configuration file -(usually found at `~/.ssh/config`). This can be done more simply by typing this -on the host machine: +Twitter is used to post updates about new courses. Access to the Twitter API +will be required for this task. 
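+
+As a concrete illustration of what this feature does, a sketch along these
+lines (shown here with the `python-twitter` library, which is an assumption;
+the project may use a different client) would post a course announcement using
+the credentials described below:
+
+    import os
+    import twitter  # the python-twitter package
+
+    api = twitter.Api(
+        consumer_key=os.environ['TWITTER_CONSUMER_KEY'],
+        consumer_secret=os.environ['TWITTER_CONSUMER_SECRET'],
+        access_token_key=os.environ['TWITTER_ACCESS_TOKEN_KEY'],
+        access_token_secret=os.environ['TWITTER_ACCESS_TOKEN_SECRET'])
+
+    # 'Example Course' is a placeholder; KarmaNotes builds this text itself.
+    api.PostUpdate('New course on KarmaNotes: Example Course')
+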
- vagrant ssh-config --host karmavm >> ~/.ssh/config +If this Twitter feature is desired, the consumer key and secret as well as the +access token key and secret are needed by the software. -The VM will, by default, route its SSH connection through localhost port 2222 -on the host machine and the base user with be vagrant. Point Fabric there when -running fab commands from `{project_root}`. So the command will look like this: +If the required environment variables are not found, then no errors will occur +and no tweets will be posted. - fab -H karmavm +To set this up, +[create a new Twitter application](https://dev.twitter.com/apps/new). +Use your Heroku App URL for the website field. Leave the Callback field blank. -In unix, it might be convenient to create and use an alias like so: +Make sure this application has read/write access. Generate an access token. Go +to your OAuth settings, and grab the "Consumer key", "Consumer secret", +"Access token", and "Access token secret". Paste these, respectively, into the +environment variables `TWITTER_CONSUMER_KEY`, `TWITTER_CONSUMER_SECRET`, +`TWITTER_ACCESS_TOKEN_KEY`, `TWITTER_ACCESS_TOKEN_SECRET`. - alias vmfab='fab -H karmavm' - vmfab +### SSL Certificate -Removing a unix alias is done with `unalias`. +If you wish to host your system publicly, you'll need an SSL certificate +signed by a proper authority. -## Connecting to the VM via SSH -If you have installed a virtual machine using `vagrant up`, you can connect -to it by running `vagrant ssh` from `{project_root}`. +Follow [Heroku's SSL setup](https://devcenter.heroku.com/articles/ssl-endpoint) +to get SSL running on your server. -## Connecting to the development website on the VM -To access the website running on the VM, point your browser at -http://localhost:6659/ using your host computer. +You may set the `SSL_REDIRECT` environment variable to `true` to make KarmaNotes +redirect insecure connections to secure ones. -Port 6659 on your local machine is set to forward to the VM's port 80. +# Local Install -Fun fact: 6659 was chosen because of OM (sanskrit) and KW (KarmaWorld) on a -phone: 66 59. +KarmaNotes is a Heroku application. Download the [Heroku toolbelt](https://toolbelt.heroku.com/). -## Updating the VM code repository -Once connected to the virtual machine by SSH, you will see `karmaworld` in -the home directory. That is the `{project_root}` in the virtual machine. +Before your running it for the first time, there are +a few setup steps: + 1. `virtualenv venv` + 1. `source venv/bin/activate` + 1. `pip install -r requirements.txt` + 1. `pip install -r requirements-dev.txt` + 1. `foreman run python manage.py syncdb --migrate --noinput` + 1. `foreman run python manage.py createsuperuser` + 1. `foreman run python manage.py fetch_usde_csv ./schools.csv` + 1. `foreman run python manage.py import_usde _csv ./schools.csv` + 1. `foreman run python manage.py sanitize_usde_schools` -`cd karmaworld` and then use `git fetch; git merge` and/or `git pull origin` as -desired. +To run KarmaNotes locally, make sure you are inside your +virtual environment (`source venv/bin/activate`) and run `foreman start`. +Press ctrl-C to kill foreman. Foreman will run Django's runserver command. +If you wish to have more control over how this is done, you can do +`foreman run python manage.py runserver `. For running any other +`manage.py` commands, you should also precede them with `foreman run` like just shown. +This simply ensures that the environment variables from `.env` are present. 
-The virtual machine's code repository is set to use your host machine's -local repository as the origin. So if you make changes locally and commit them, -without pushing them anywhere, your VM can pull those changes in for testing. +# Heroku Install -This may seem like duplication. It is. The duplication allows your host machine -to maintain git credentials and manage repository access control so that your -virtual machine doesn't need sensitive information. Your virtual machine simply -pulls from the local repository on your local file system without needing -credentials, etc. +KarmaNotes is a Heroku application. Download the [Heroku toolbelt](https://toolbelt.heroku.com/). -## Deleting the Virtual Machine -If you want to start a fresh virtual machine or simply remove the virtual -machine from your hard drive, Vagrant has a command for that. While in -`{project_root}` of the host system, type `vagrant destroy` and confirm with -`y`. This will remove the VM from your hard drive. +To run KarmaNotes on Heroku, do `heroku create` and `git push heroku master` as typical +for a Heroku application. Set your the variable `BUILDPACK_URL` to +`https://github.com/FinalsClub/heroku-buildpack-karmanotes` to use a buildpack +designed to support KarmaNotes. -If you wanted a fresh VM, the next step is to run `vagrant up`, which will -start a brand new VM (since the old one is gone). +You will need to import the US Department of Education's list of accredited schools. + 1. Fetch USDE schools with + `heroku run python manage.py fetch_usde_csv ./schools.csv` + 1. Upload the schools into the database with + `heroku run python /manage.py import_usde _csv ./schools.csv` + 1. Clean up redundant information with + `heroku run python /manage.py sanitize_usde_schools` -## Other Vagrant commands -Please see [vagrant documentation](http://docs.vagrantup.com/v2/cli/index.html) -for more information on how to use the vagrant CLI to manage your development -VM. # Django Database management @@ -491,9 +314,9 @@ VM. We have setup Django to use [south](http://south.aeracode.org/wiki/QuickStartGuide) for migrations. When changing models, it is important to run -`python {project_root}/manage.py schemamigration` which will create a migration +`foreman run python manage.py schemamigration` which will create a migration to reflect the model changes into the database. These changes can be pulled -into the database with `python {project_root}/manage.py migrate`. +into the database with `foreman run python manage.py migrate`. Sometimes the database already has a migration performed on it, but that information wasn't told to south. There are subtleties to the process which @@ -505,15 +328,13 @@ flag. A number of assets have been added to the repository which come from external sources. It would be difficult to keep a complete list in this README and keep it up to date. Software which originally came from outside parties can -generally be found in `{project_root}/karmaworld/assets`. +generally be found in `karmaworld/assets`. Additionally, all third party Python projects (downloaded and installed with pip) are listed in these files: -* `{project_root}/reqs/common.txt` -* `{project_root}/reqs/dev.txt` -* `{project_root}/reqs/prod.txt` -* `{project_root}/reqs/vmdev.txt` (just a combo of dev.txt and prod.txt) +* `requirements.txt` +* `requirements-dev.txt` # Thanks