From: Seth Woodworth Date: Thu, 17 Jan 2013 18:38:52 +0000 (-0500) Subject: import gdrive.py and the ajaxuploader app X-Git-Tag: release-20150131~557^2~7 X-Git-Url: https://git.librecmc.org/?a=commitdiff_plain;h=1597982579621ce7bc071c9fa8f478726fc13803;p=oweals%2Fkarmaworld.git import gdrive.py and the ajaxuploader app --- diff --git a/karmaworld/apps/ajaxuploader/README.md b/karmaworld/apps/ajaxuploader/README.md new file mode 100644 index 0000000..4027161 --- /dev/null +++ b/karmaworld/apps/ajaxuploader/README.md @@ -0,0 +1,329 @@ +`django-ajax-uploader` provides a useful class you can use to easily implement ajax uploads. + +It uses valum's great uploader: https://github.com/valums/file-uploader, and draws heavy inspiration and some code from +https://github.com/alexkuhl/file-uploader + +In short, it implements a callable class, `AjaxFileUploader` that you can use to handle uploads. By default, `AjaxFileUploader` assumes you want to upload to local storage, but you can select any other backend if desired or write your own (see backends section below). Pull requests welcome! + +Updates +======= + +Version 0.2.1 is released, and contains: + +* JSON parsing of `extra_context` now properly handles datetimes. (Thanks to onyxfish) + + +Version 0.2 is released, and contains: + +* Optional `fileLimit` param for the uploader, to limit the number of allowed files. (Thanks to qnub) +* fhahn's `default_storage` backend. + + +Version 0.1.1 is released, and contains: + +* Support for a CouchDB backend +* A backwards-incompatible change to the location of the ajaxuploader static files. I try to avoid backwards incompatibilities, but since /js and /css are the proper conventions and the lib is relatively young, it seemed better to get things right now, instead of waiting. The static files are now at: + * `{{STATIC_URL}}ajaxuploader/js/fileuploader.js` + * `{{STATIC_URL}}ajaxuploader/css/fileuploader.css` + + +Usage +===== +Step 1. Install django-ajax-uploader. 
+------------------------------------- +It's in pypi now, so simply: + +- `pip install ajaxuploader` + +You may also need to install backend-specific dependences. + + - For the S3 backend, you will need [boto](https://github.com/boto/boto). ( `pip install boto` ) + - For the MongoDB GridFS backend, you will need [pymongo](https://github.com/AloneRoad/pymongo) ( `pip install pymongo` ) + +Step 2. (Django 1.3 only) +------------------------- +For Django 1.3 you will need to have the app in your installed apps tuple for collect static to pick up the files. + +First Add 'ajaxuploader' to you installed apps in settings.py + +``` +INSTALLED_APPS = ( + ... + "ajaxuploader", +) +``` + +Then: + +``` +$ python manage.py collectstatic +``` + +Step 3. Include it in your app's views and urls. +------------------------------------------------ +You'll need to make sure to meet the csrf requirements to still make valum's uploader work. Code similar to the following should work: + +views.py + +```python +from django.middleware.csrf import get_token +from django.shortcuts import render_to_response +from django.template import RequestContext + +from ajaxuploader.views import AjaxFileUploader + + +def start(request): + csrf_token = get_token(request) + return render_to_response('import.html', + {'csrf_token': csrf_token}, context_instance = RequestContext(request)) + +import_uploader = AjaxFileUploader() +``` + +urls.py + +``` +url(r'start$', views.start, name="start"), +url(r'ajax-upload$', views.import_uploader, name="my_ajax_upload"), +``` + +Step 4. Set up your template. +----------------------------- +This sample is included in the templates directory, but at the minimum, you need: + +```html + + + + + + + + +
+<!-- (widget markup stripped from this copy: a <link> to ajaxuploader/css/fileuploader.css, a placeholder <div> for the uploader, a <script> tag for ajaxuploader/js/fileuploader.js, and an inline <script> that instantiates qq.FileUploader with the csrf_token) -->
+ + +``` + +Backends +======== + +`django-ajax-uploader` can put the uploaded files into a number of places, and perform actions on the files uploaded. Currently, +there are backends available for local storage (default), Amazon S3, MongoDB (GridFS), CouchDB, and a locally stored image +thumbnail backend. Creating a custom backend is fairly straightforward, and pull requests are welcome. + +Built-in Backends +------------------ + +`django-ajax-uploader` has the following backends: + +### local.LocalUploadBackend ### + +Stores the file locally, by default to `{MEDIA_ROOT}/uploads`. + +Requirements: + +* None + +Settings: + +* `UPLOAD_DIR` : The directory to store the uploaded file in, within `MEDIA_ROOT`. Defaults to "uploads". +* `BUFFER_SIZE`: The size of each chunk to write. Defaults to 10 MB. See the caveat at the bottom before changing it. + +Context returned: + +* `path`: The full media path to the uploaded file. + + +### mongodb.MongoDBUploadBackend ### + +Stores the file in MongoDB via GridFS + +Requirements + +* [pymongo](https://github.com/AloneRoad/pymongo) + +Settings: + +* `AJAXUPLOAD_MONGODB_HOST`: Specify the host of your MongoDB server. Defaults to localhost if not specified. +* `AJAXUPLOAD_MONGODB_PORT`: Specify the port of your MongoDB server. Defaults to 27017 if not specified. + +Arguments + +* db (required): Specify the database within MongoDB you wish to use +* collection (optional): Specify the collection within the db you wish to use. This is optional and will default to `fs` if not specified + + +Context returned: + +* None + + +### couch.CouchDBUploadBackend ### + +Stores the file in a CouchDB backend + +Requirements + +* [couchdb](http://code.google.com/p/couchdb-python/) + +Settings: + +* `AJAXUPLOAD_COUCHDB_HOST`: Specify the host of your CouchDB server. Defaults to `http://localhost:5984` if not specified. 
+ +Arguments + +* db (required): Specify the database within CouchDB you wish to use + + +Context returned: + +* None + + +### s3.S3UploadBackend ### + +Stores the file in Amazon's S3. + +Requirements: + +* [boto](https://github.com/boto/boto) + +Settings: + +* `NUM_PARALLEL_PROCESSES` : Uploads to Amazon are parallelized to increase speed. If you have more cores and a big pipe, increase this setting for better performance. Defaults to 4. +* `BUFFER_SIZE`: The size of each chunk to write. Defaults to 10 MB. + +Context returned: + +* None + + +### thumbnail.ThumbnailUploadBackend ### + +Stores a thumbnail of the locally, optionally discarding the upload. Subclasses `LocalUploadBackend`. + +Requirements: + +* [sorl-thumbnail](https://github.com/sorl/sorl-thumbnail) + +Settings: + +* `DIMENSIONS` : A string of the dimensions (WxH) to resize the uploaded image to. Defaults to "100x100" +* `KEEP_ORIGINAL`: Whether to keep the originally uploaded file. Defaults to False. +* `BUFFER_SIZE`: The size of each chunk to write. Defaults to 10 MB. + +Context returned: + +* `path`: The full media path to the uploaded file. + + +### default_storage.DefaultStorageUploadBackend ### + +This backend uses Django's default storage backend (defined by the DEFAULT_FILE_STORAGE setting) to save the uploaded files. + +Requirements: + +* None + +Settings: + +* `UPLOAD_DIR` : The directory to store the uploaded file in, within `MEDIA_ROOT`. Defaults to "uploads". +* `BUFFER_SIZE`: The size of each chunk to write. Defaults to 10 MB. See the caveat at the bottom before changing it. + +Context returned: + +* `path`: The full media path to the uploaded file. + + +Backend Usage +------------------------ + +The default backend is `local.LocalUploadBackend`. To use another backend, specify it when instantiating `AjaxFileUploader`. 
+
+For instance, to use `MongoDBUploadBackend`:
+
+views.py
+
+```python
+from ajaxuploader.views import AjaxFileUploader
+from ajaxuploader.backends.mongodb import MongoDBUploadBackend
+
+...
+import_uploader = AjaxFileUploader(backend=MongoDBUploadBackend, db='uploads')
+```
+
+To set custom parameters, pass them along at instantiation. For example, to generate larger thumbnails while preserving the originals:
+
+views.py
+
+    from ajaxuploader.backends.thumbnail import ThumbnailUploadBackend
+
+    ...
+    import_uploader = AjaxFileUploader(backend=ThumbnailUploadBackend, DIMENSIONS="500x500", KEEP_ORIGINAL=True)
+
+
+Custom Backends
+---------------
+
+To write a custom backend, inherit from `backends.base.AbstractUploadBackend` and implement the `upload_chunk` method. All of the methods you can override are described below.
+
+* `upload_chunk`: takes a string and writes it to the specified location.
+* `setup`: takes the original filename and does any pre-processing needed before the upload starts (for example, the S3 backend uses this method to establish a connection with the S3 server).
+* `update_filename`: takes the `request` object and the original name of the file being uploaded, and may return a new filename that will be used to refer to the file being saved. If undefined, the uploaded filename is used. Unless overridden by `upload_complete`, this value is returned in the response.
+* `upload_complete`: receives the `request` object and the filename post-`update_filename`, and does any cleanup or manipulation after the upload is complete (examples: cropping the image, disconnecting from the server). If a dict is returned, it is used to update the response returned to the client.
+
+
+Caveats
+=======
+`BUFFER_SIZE` - some users have reported problems using smaller buffer sizes. I also saw random failed uploads with very small sizes like 32k.
10MB has been completely reliable for me, and in what I've read here and there, so do some testing if you want to try a different value. Note that this doesn't have a big impact on the overall upload speed. + + +Credits +======= +Original implementation and ongoing maintenance by [skoczen](https://github.com/skoczen), courtesy of [GoodCloud](https://www.agoodcloud.com). +Most of the backend abstraction was written by [chromano](https://github.com/chromano) and [shockflash](https://github.com/shockflash). +MongoDB support and saner defaults by [chrisjones-brack3t](https://github.com/chrisjones-brack3t). +Threadsafe improvements and bugfixes by [dwaiter](https://github.com/dwaiter). +CouchDB support by [paepke](https://github.com/paepke). +Default Storage backend by [fhahn](https://github.com/fhahn). +File number limit in upload by [qnub](https://github.com/qnub). +JSON parsing improvements by [onyxfish](https://github.com/onyxfish). + +This code began as such a trivial layer on top of [valum's uploader](http://valums.com/ajax-upload/), [boto](https://github.com/boto/boto), and [alex's ideas](http://kuhlit.blogspot.com/2011/04/ajax-file-uploads-and-csrf-in-django-13.html) it's silly. However, I didn't find any implementations that *just worked*, so hopefully it's useful to someone else. I also drew from these sources: + +* http://www.topfstedt.de/weblog/?p=558 +* http://www.elastician.com/2010/12/s3-multipart-upload-in-boto.html +* https://github.com/valums/file-uploader +* https://github.com/alexkuhl/file-uploader + +Many thanks to all for writing such helpful and readable code! 
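The custom-backend contract described above is small enough to show end to end. The sketch below is a hypothetical in-memory backend (not part of the package); for self-containment it inlines a stripped-down copy of the `AbstractUploadBackend` from `backends/base.py`, which in a real project you would import instead:

```python
# Minimal sketch of a custom backend. In a real project you would do
# `from ajaxuploader.backends.base import AbstractUploadBackend`;
# a stripped-down copy of that base class is inlined here so the
# example is self-contained.

class AbstractUploadBackend(object):
    BUFFER_SIZE = 10485760  # 10 MB

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def setup(self, filename):
        """Pre-processing hook, called before the first chunk arrives."""

    def update_filename(self, request, filename):
        """Optionally return a new name for the uploaded file."""

    def upload_chunk(self, chunk):
        """Write one chunk to the destination; subclasses must implement."""
        raise NotImplementedError

    def upload_complete(self, request, filename):
        """Post-upload hook; may return a dict merged into the response."""


class InMemoryUploadBackend(AbstractUploadBackend):
    """Hypothetical backend that accumulates the upload in memory."""

    def setup(self, filename):
        self._chunks = []

    def upload_chunk(self, chunk):
        self._chunks.append(chunk)

    def upload_complete(self, request, filename):
        # Join the buffered chunks and report the total size back
        # to the client via the JSON response context.
        self.contents = b"".join(self._chunks)
        return {"size": len(self.contents)}
```

It would plug in the same way as the built-in backends: `import_uploader = AjaxFileUploader(backend=InMemoryUploadBackend)`.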
diff --git a/karmaworld/apps/ajaxuploader/backends/__init__.py b/karmaworld/apps/ajaxuploader/backends/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/karmaworld/apps/ajaxuploader/backends/base.py b/karmaworld/apps/ajaxuploader/backends/base.py
new file mode 100644
index 0000000..295912c
--- /dev/null
+++ b/karmaworld/apps/ajaxuploader/backends/base.py
@@ -0,0 +1,38 @@
+class AbstractUploadBackend(object):
+    BUFFER_SIZE = 10485760  # 10 MB
+
+    def __init__(self, **kwargs):
+        self.__dict__.update(kwargs)
+
+    def setup(self, filename):
+        """Responsible for doing any pre-processing needed before the upload
+        starts."""
+
+    def update_filename(self, request, filename):
+        """Returns a new name for the file being uploaded."""
+
+    def upload_chunk(self, chunk):
+        """Called when a string was read from the client; responsible for
+        writing that string to the destination file."""
+        raise NotImplementedError
+
+    def upload_complete(self, request, filename):
+        """Overridden to perform any actions needed post-upload; returns
+        a dict to be added to the render / JSON context."""
+
+    def upload(self, uploaded, filename, raw_data):
+        try:
+            if raw_data:
+                # File was uploaded via ajax, and is streaming in.
+                chunk = uploaded.read(self.BUFFER_SIZE)
+                while len(chunk) > 0:
+                    self.upload_chunk(chunk)
+                    chunk = uploaded.read(self.BUFFER_SIZE)
+            else:
+                # File was uploaded via a POST, and is here.
+                for chunk in uploaded.chunks():
+                    self.upload_chunk(chunk)
+            return True
+        except Exception:
+            # Something went wrong while reading or writing the upload.
+ return False diff --git a/karmaworld/apps/ajaxuploader/backends/default_storage.py b/karmaworld/apps/ajaxuploader/backends/default_storage.py new file mode 100644 index 0000000..28bc1ca --- /dev/null +++ b/karmaworld/apps/ajaxuploader/backends/default_storage.py @@ -0,0 +1,33 @@ +import os + +from django.core.files.storage import default_storage +from django.core.files.base import ContentFile +from django.template.loader import render_to_string + +from ajaxuploader.backends.base import AbstractUploadBackend + + +class DefaultStorageUploadBackend(AbstractUploadBackend): + """ + Uses Django's default storage backend to store the uploaded files + see https://docs.djangoproject.com/en/dev/topics/files/#file-storage + """ + + UPLOAD_DIR = 'uploads' + + def setup(self, filename): + # join UPLOAD_DIR with filename + new_path = os.path.join(self.UPLOAD_DIR, filename) + + # save empty file in default storage with path = new_path + self.path = default_storage.save(new_path, ContentFile('')) + + # create BufferedWriter for new file + self._dest = default_storage.open(self.path, mode='wb') + + def upload_chunk(self, chunk): + self._dest.write(chunk) + + def upload_complete(self, request, filename): + self._dest.close() + return {"path": self.path} diff --git a/karmaworld/apps/ajaxuploader/backends/local.py b/karmaworld/apps/ajaxuploader/backends/local.py new file mode 100644 index 0000000..38ea8f5 --- /dev/null +++ b/karmaworld/apps/ajaxuploader/backends/local.py @@ -0,0 +1,119 @@ +from io import FileIO, BufferedWriter +import os + +from django.conf import settings +from django.contrib.auth.models import User + +from ajaxuploader.backends.base import AbstractUploadBackend + +# Requires the KarmanNotes project +from notes.models import File +from notes import tasks +from KNotes import settings as KarmaSettings + +class LocalUploadBackend(AbstractUploadBackend): + #UPLOAD_DIR = "uploads" + # The below key must be synchronized with the implementing project + # Used to store an 
array of unclaimed file_pks in the django session + # So they can be claimed later when the anon user authenticates + SESSION_UNCLAIMED_FILES_KEY = KarmaSettings.SESSION_UNCLAIMED_FILES_KEY + + # When a file is uploaded anonymously, + # What username should we assign ownership to? + # This is important because File.save + # behavior will not set awarded_karma to True + # until an owner is assigned who has username != this + DEFAULT_UPLOADER_USERNAME = KarmaSettings.DEFAULT_UPLOADER_USERNAME + + def setup(self, filename): + self._path = os.path.join( + settings.MEDIA_ROOT, filename) + try: + os.makedirs(os.path.realpath(os.path.dirname(self._path))) + except: + pass + self._dest = BufferedWriter(FileIO(self._path, "w")) + + def upload_chunk(self, chunk): + self._dest.write(chunk) + + def upload(self, uploaded, filename, raw_data): + try: + if raw_data: + # File was uploaded via ajax, and is streaming in. + chunk = uploaded.read(self.BUFFER_SIZE) + while len(chunk) > 0: + self.upload_chunk(chunk) + chunk = uploaded.read(self.BUFFER_SIZE) + else: + # File was uploaded via a POST, and is here. + for chunk in uploaded.chunks(): + self.upload_chunk(chunk) + return True + except: + # things went badly. + return False + + def upload_complete(self, request, filename, upload): + path = settings.MEDIA_URL + "/" + filename + self._dest.close() + + self._dir = settings.MEDIA_ROOT + + # Avoid File.objects.create, as this will try to make + # Another file copy at FileField's 'upload_to' dir + new_File = File() + new_File.file = os.path.join(self._dir, filename) + new_File.type = "N" # This field was initially not allowed NULL + if request.user.is_authenticated(): + new_File.owner = request.user + else: + new_File.owner, _created = User.objects.get_or_create(username=self.DEFAULT_UPLOADER_USERNAME) + new_File.save() + #print "uploaded file saved!" 
+        if not request.user.is_authenticated():
+            # Record anonymous uploads in the session so they can be
+            # claimed once the user authenticates
+            if self.SESSION_UNCLAIMED_FILES_KEY in request.session:
+                request.session[self.SESSION_UNCLAIMED_FILES_KEY].append(new_File.pk)
+            else:
+                request.session[self.SESSION_UNCLAIMED_FILES_KEY] = [new_File.pk]
+
+        # Asynchronously process document with Google Documents API
+        print "upload_complete, firing task"
+        tasks.process_document.delay(new_File)
+
+        return {"path": path, "file_pk": new_File.pk, "file_url": new_File.get_absolute_url()}
+
+    def update_filename(self, request, filename):
+        """
+        Returns a new name for the file being uploaded.
+        Ensures a file with that name doesn't already exist and, if it does,
+        creates a unique filename to avoid overwriting.
+        """
+        self._dir = settings.MEDIA_ROOT
+        unique_filename = False
+        filename_suffix = 0
+
+        # Check whether a file already exists at this path
+        if os.path.isfile(os.path.join(self._dir, filename)):
+            while not unique_filename:
+                try:
+                    if filename_suffix == 0:
+                        open(os.path.join(self._dir, filename))
+                    else:
+                        filename_no_extension, extension = os.path.splitext(filename)
+                        #print "filename already exists.
Trying " + filename_no_extension + str(filename_suffix) + extension
+                        open(os.path.join(self._dir, filename_no_extension + str(filename_suffix) + extension))
+                    filename_suffix += 1
+                except IOError:
+                    unique_filename = True
+
+        if filename_suffix == 0:
+            return filename
+        else:
+            return filename_no_extension + str(filename_suffix) + extension
diff --git a/karmaworld/apps/ajaxuploader/backends/s3.py b/karmaworld/apps/ajaxuploader/backends/s3.py
new file mode 100644
index 0000000..be11d13
--- /dev/null
+++ b/karmaworld/apps/ajaxuploader/backends/s3.py
@@ -0,0 +1,33 @@
+from multiprocessing import Pool
+from StringIO import StringIO
+
+import boto
+from django.conf import settings
+
+from ajaxuploader.backends.base import AbstractUploadBackend
+
+class S3UploadBackend(AbstractUploadBackend):
+    NUM_PARALLEL_PROCESSES = 4
+
+    def upload_chunk(self, chunk):
+        self._counter += 1
+        buffer = StringIO()
+        buffer.write(chunk)
+        buffer.seek(0)
+        # Pass the callable and its arguments separately; calling
+        # upload_part_from_file() inline would run the upload
+        # synchronously and hand apply_async its return value.
+        self._pool.apply_async(
+            self._mp.upload_part_from_file, (buffer, self._counter))
+
+    def setup(self, filename):
+        self._bucket = boto.connect_s3(settings.AWS_ACCESS_KEY_ID,
+                                       settings.AWS_SECRET_ACCESS_KEY)\
+                           .lookup(settings.AWS_BUCKET_NAME)
+        self._mp = self._bucket.initiate_multipart_upload(filename)
+        self._pool = Pool(processes=self.NUM_PARALLEL_PROCESSES)
+        self._counter = 0
+
+    def upload_complete(self, request, filename):
+        # Tie up loose ends, and finish the upload
+        self._pool.close()
+        self._pool.join()
+        self._mp.complete_upload()
diff --git a/karmaworld/apps/ajaxuploader/models.py b/karmaworld/apps/ajaxuploader/models.py
new file mode 100644
index 0000000..e69de29
diff --git a/karmaworld/apps/ajaxuploader/templates/ajaxuploader/sample.html b/karmaworld/apps/ajaxuploader/templates/ajaxuploader/sample.html
new file mode 100644
index 0000000..88b33e7
--- /dev/null
+++
b/karmaworld/apps/ajaxuploader/templates/ajaxuploader/sample.html @@ -0,0 +1,40 @@ + + + + + + + +
+<!-- (sample page markup stripped from this copy: the uploader <div>, the fileuploader.js/css includes, and the <script> that instantiates qq.FileUploader) -->
+ + + + diff --git a/karmaworld/apps/ajaxuploader/views.py b/karmaworld/apps/ajaxuploader/views.py new file mode 100644 index 0000000..e8e5dd9 --- /dev/null +++ b/karmaworld/apps/ajaxuploader/views.py @@ -0,0 +1,62 @@ +from django.utils import simplejson as json +from django.core.serializers.json import DjangoJSONEncoder + +from django.http import HttpResponse, HttpResponseBadRequest, Http404 + +from ajaxuploader.backends.local import LocalUploadBackend + +class AjaxFileUploader(object): + def __init__(self, backend=None, **kwargs): + if backend is None: + backend = LocalUploadBackend + self.get_backend = lambda: backend(**kwargs) + + def __call__(self,request): + return self._ajax_upload(request) + + def _ajax_upload(self, request): + if request.method == "POST": + if request.is_ajax(): + # the file is stored raw in the request + upload = request + is_raw = True + # AJAX Upload will pass the filename in the querystring if it + # is the "advanced" ajax upload + try: + filename = request.GET['qqfile'] + except KeyError: + return HttpResponseBadRequest("AJAX request not valid") + # not an ajax upload, so it was the "basic" iframe version with + # submission via form + else: + is_raw = False + if len(request.FILES) == 1: + # FILES is a dictionary in Django but Ajax Upload gives + # the uploaded file an ID based on a random number, so it + # cannot be guessed here in the code. Rather than editing + # Ajax Upload to pass the ID in the querystring, observe + # that each upload is a separate request, so FILES should + # only have one entry. Thus, we can just grab the first + # (and only) value in the dict. 
+                upload = request.FILES.values()[0]
+            else:
+                raise Http404("Bad Upload")
+            filename = upload.name
+
+            backend = self.get_backend()
+
+            # custom filename handler
+            filename = (backend.update_filename(request, filename)
+                        or filename)
+            # save the file
+            backend.setup(filename)
+            success = backend.upload(upload, filename, is_raw)
+            # callback
+            extra_context = backend.upload_complete(request, filename, upload)
+
+            # let Ajax Upload know whether we saved it or not
+            ret_json = {'success': success, 'filename': filename}
+            if extra_context is not None:
+                ret_json.update(extra_context)
+
+            return HttpResponse(json.dumps(ret_json, cls=DjangoJSONEncoder))
diff --git a/karmaworld/apps/notes/gdrive.py b/karmaworld/apps/notes/gdrive.py
new file mode 100644
index 0000000..9aadfaa
--- /dev/null
+++ b/karmaworld/apps/notes/gdrive.py
@@ -0,0 +1,163 @@
+#!/usr/bin/env python
+# -*- coding:utf8 -*-
+# Copyright (C) 2012 FinalsClub Foundation
+
+import datetime
+import mimetypes
+import os
+
+import httplib2
+from apiclient.discovery import build
+from apiclient.http import MediaFileUpload
+from oauth2client.client import flow_from_clientsecrets
+
+from karmaworld.apps.notes.models import DriveAuth, Note
+
+CLIENT_SECRET = './notes/client_secrets.json' # FIXME
+from credentials import GOOGLE_USER # FIXME
+EXT_TO_MIME = {'.docx': 'application/msword'}
+
+def build_flow():
+    """ Create an oauth2 authentication object with our preferred details """
+    scopes = [
+        'https://www.googleapis.com/auth/drive',
+        'https://www.googleapis.com/auth/drive.file',
+        'https://www.googleapis.com/auth/userinfo.email',
+        'https://www.googleapis.com/auth/userinfo.profile',
+    ]
+
+    flow = flow_from_clientsecrets(CLIENT_SECRET, ' '.join(scopes),
+            redirect_uri='http://localhost:8000/oauth2callback')
+    flow.params['access_type'] = 'offline'
+    flow.params['approval_prompt'] = 'force'
+    flow.params['user_id'] = GOOGLE_USER
+    return flow
+
+
+def authorize():
+    """ Use an oauth2client flow object to
generate the web url to create a new
+        auth that can then be stored """
+    flow = build_flow()
+    print flow.step1_get_authorize_url()
+
+
+def accept_auth(code):
+    """ Callback endpoint for accepting the post-`authorize()` google drive
+        response and generating a credentials object
+        :code: an authentication token from a WEB oauth dialog
+        :returns: an oauth2client credentials object """
+    flow = build_flow()
+    creds = flow.step2_exchange(code)
+    return creds
+
+
+def build_api_service(creds):
+    http = httplib2.Http()
+    http = creds.authorize(http)
+    return build('drive', 'v2', http=http), http
+
+
+def check_and_refresh(creds, auth):
+    """ Check a Credentials object's expiration token;
+        if it is out of date, refresh the token and save
+        :creds: a Credentials object
+        :auth: the DriveAuth that backs the creds object
+        :returns: updated creds and auth objects
+    """
+    if creds.token_expiry < datetime.datetime.utcnow():
+        # if we are past the token expiry,
+        # refresh the creds and store them
+        http = httplib2.Http()
+        http = creds.authorize(http)
+        creds.refresh(http)
+        auth.credentials = creds.to_json()
+        auth.save()
+    return creds, auth
+
+
+def convert_with_google_drive(note):
+    """ Upload a local note and download HTML
+        using Google Drive
+        :note: a Note model instance
+    """
+    # Get file_type and encoding of the uploaded file,
+    # e.g. file_type = 'text/plain', encoding = None
+    (file_type, encoding) = mimetypes.guess_type(note.file.path)
+
+    # If the mimetype cannot be guessed, check the extension against
+    # the known issues in EXT_TO_MIME, and finally raise an exception
+    file_name, file_extension = os.path.splitext(note.file.path)
+
+    if file_type is None:
+        if file_extension.strip().lower() in EXT_TO_MIME:
+            file_type = EXT_TO_MIME[file_extension.strip().lower()]
+        # If both mimetypes.guess_type and EXT_TO_MIME fail to cover
+        # the file, return an error
+        else:
+            raise Exception('Unknown file type')
+
+    resource = {
+        'title': note.title,
'description': note.description,
+        'mimeType': file_type
+    }
+    # TODO: set the permission of the file to permissive so we can use the
+    # gdrive_url to serve files directly to users
+    media = MediaFileUpload(note.file.path, mimetype=file_type,
+                chunksize=1024*1024, resumable=True)
+
+    auth = DriveAuth.objects.filter(email=GOOGLE_USER).all()[0]
+    creds = auth.transform_to_cred()
+
+    creds, auth = check_and_refresh(creds, auth)
+
+    service, http = build_api_service(creds)
+
+    # Upload the file
+    # TODO: wrap this in a try loop that does a token refresh if it fails
+    print "Trying to upload document"
+    file_dict = service.files().insert(body=resource, media_body=media,
+                                       convert=True).execute()
+
+    # set note.is_pdf
+    if file_type == 'application/pdf':
+        # Get a fresh copy of the note from the database
+        new_file = Note.objects.get(id=note.id)
+        # If it's a pdf, save an embed_url from file_dict['selfLink'] instead
+        new_file.is_pdf = True
+        new_file.embed_url = file_dict[u'selfLink']
+        new_file.gdrive_url = file_dict[u'downloadUrl']
+    else:
+        # get the converted filetype urls
+        download_urls = {}
+        download_urls['html'] = file_dict[u'exportLinks']['text/html']
+        download_urls['text'] = file_dict[u'exportLinks']['text/plain']
+        content_dict = {}
+
+        for download_type, download_url in download_urls.items():
+            print "\n%s -- %s" % (download_type, download_url)
+            resp, content = http.request(download_url, "GET")
+
+            if resp.status == 200:
+                print "\t downloaded!"
+ # save to the File.property resulting field + content_dict[download_type] = content + else: + print "\t Download failed: %s" % resp.status + + # Get a new copy of the file from the database with the new metadata from filemeta + new_file = Note.objects.get(id=note.id) + + # set the .odt as the download from google link + new_file.gdrive_url = file_dict[u'exportLinks']['application/vnd.oasis.opendocument.text'] + new_file.html = content_dict['html'] + new_file.text = content_dict['text'] + + # Finally, save whatever data we got back from google + new_file.save()
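The mime-type fallback at the top of `convert_with_google_drive` is easy to factor out and check in isolation. A minimal, self-contained sketch of that logic follows; the function name `guess_note_mime_type` is made up for illustration, and the `EXT_TO_MIME` table mirrors the one in `gdrive.py`:

```python
import mimetypes
import os

# Known extensions that mimetypes.guess_type() may miss (mirrors gdrive.py)
EXT_TO_MIME = {'.docx': 'application/msword'}


def guess_note_mime_type(path):
    """Guess a file's mime type, falling back to EXT_TO_MIME,
    and raising an exception when both lookups fail."""
    file_type, _encoding = mimetypes.guess_type(path)
    if file_type is None:
        extension = os.path.splitext(path)[1].strip().lower()
        file_type = EXT_TO_MIME.get(extension)
    if file_type is None:
        raise Exception('Unknown file type')
    return file_type
```

This keeps the "guess, then fall back, then fail loudly" order of the original, so adding support for a new extension is a one-line change to `EXT_TO_MIME`.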