I am looking over this website but just can’t seem to figure out how to do this as it’s not working. I need to check if the current site user is logged in (authenticated), and am trying:
request.user.is_authenticated
despite being sure that the user is logged in, it returns just:
>
I’m able to do other requests (from the first section in the URL above), such as:
is_authenticated is now an attribute in Django 1.10.
The method was removed in Django 2.0.
For Django 1.9 and older:
is_authenticated is a function. You should call it like
if request.user.is_authenticated():
# do something if the user is authenticated
As Peter Rowell pointed out, what may be tripping you up is that in the default Django template language, you don’t tack on parentheses to call functions. So you may have seen something like this in template code:
{% if user.is_authenticated %}
However, in Python code, it is indeed a method in the User class.
Answer 1
Django 1.10+
Use the property, not the method:
if request.user.is_authenticated: # <- no parentheses any more!
# do something if the user is authenticated
The use of the method of the same name is deprecated in Django 2.0, and is no longer mentioned in the Django documentation.
Note that for Django 1.10 and 1.11, the value of the property is a CallableBool and not a boolean, which can cause some strange bugs.
For example, I had a view that returned JSON which, after updating to the property request.user.is_authenticated, started throwing the exception TypeError: Object of type 'CallableBool' is not JSON serializable. The solution was to use JsonResponse, which handled the CallableBool object properly when serializing:
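A minimal sketch of the fix described, assuming a simple view (the view name and payload are illustrative, not from the original answer):
from django.http import JsonResponse

def whoami(request):
    # json.dumps() chokes on CallableBool; per the answer above, routing the
    # value through JsonResponse serialized it without the TypeError
    return JsonResponse({'is_authenticated': request.user.is_authenticated})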
As Bernhard Vallant said, if you want a queryset which excludes the specified range ends you should consider his solution, which utilizes gt/lt (greater-than/less-than).
When doing Django ranges with a filter, make sure you know the difference between using a date object and a datetime object. __range is inclusive on dates, but if you use a datetime object for the end date it will not include the entries for that day if the time is not set.
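For example, a filter like the following sketch (ExampleModel and some_datetime_field are placeholder names, matching the snippet further down)
from datetime import date, timedelta

startdate = date.today()
enddate = startdate + timedelta(days=7)
ExampleModel.objects.filter(some_datetime_field__range=[startdate, enddate])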
returns all entries from startdate to enddate including entries on those dates. Bad example since this is returning entries a week into the future, but you get the drift.
You can get around the “impedance mismatch” caused by the lack of precision in the DateTimeField/date object comparison (which can occur when using range) by using a datetime.timedelta to add a day to the last date in the range. This works like:
import datetime
from datetime import date

start = date(2012, 12, 11)
end = date(2012, 12, 18)
new_end = end + datetime.timedelta(days=1)

ExampleModel.objects.filter(some_datetime_field__range=[start, new_end])
As discussed previously, without doing something like this, records are ignored on the last day.
Edited to avoid the use of datetime.combine — seems more logical to stick with date instances when comparing against a DateTimeField, instead of messing about with throwaway (and confusing) datetime objects. See further explanation in comments below.
To make it more flexible, you can design a FilterBackend as below:
from rest_framework import filters as generic_filters  # assumed import; BaseFilterBackend lives in rest_framework.filters

class AnalyticsFilterBackend(generic_filters.BaseFilterBackend):
def filter_queryset(self, request, queryset, view):
predicate = request.query_params # or request.data for POST
if predicate.get('from_date', None) is not None and predicate.get('to_date', None) is not None:
queryset = queryset.filter(your_date__range=(predicate['from_date'], predicate['to_date']))
if predicate.get('from_date', None) is not None and predicate.get('to_date', None) is None:
queryset = queryset.filter(your_date__gte=predicate['from_date'])
if predicate.get('to_date', None) is not None and predicate.get('from_date', None) is None:
queryset = queryset.filter(your_date__lte=predicate['to_date'])
return queryset
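Hooking the backend into a view is then a one-liner; a sketch (YourViewSet, YourModel and YourSerializer are placeholders):
from rest_framework import viewsets

class YourViewSet(viewsets.ModelViewSet):
    queryset = YourModel.objects.all()
    serializer_class = YourSerializer
    filter_backends = [AnalyticsFilterBackend]  # applies the date-range filtering above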
Answer 6
Still relevant today. You can also do this:
import dateutil.parser
import pytz

date = dateutil.parser.parse('02/11/2019').replace(tzinfo=pytz.UTC)
You’re not recommended to do that from the shell, and this is intentional: you shouldn’t really be executing random scripts inside the Django environment (but there are ways around this, see the other answers).
If this is a script that you will be running multiple times, it’s a good idea to set it up as a custom command, i.e.
$ ./manage.py my_command
To do this, create a file in a management/commands subdirectory of your app, i.e. myapp/management/commands/my_command.py,
and in this file define your custom command (ensuring that the name of the file is the name of the command you want to execute from ./manage.py)
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        # now do the things that you want with your models here
        pass
As other answers indicate but don’t explicitly state, what you may actually need is not necessarily to execute your script from the Django shell, but to access your apps without using the Django shell.
import os, sys

sys.path.append('/path/to/myproject')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.file")

import django
django.setup()

import project.app.models
# do things with my models, yay
Something I just found to be interesting is Django Scripts, which allows you to write scripts to be run with python manage.py runscript foobar. More detailed information on implementation and structure can be found here, http://django-extensions.readthedocs.org/en/latest/index.html
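As a sketch of that convention (django-extensions expects a run() function; the app, model and script names here are placeholders):
# myapp/scripts/foobar.py
from myapp.models import SomeModel  # placeholder model import

def run():
    # manage.py has already configured Django, so the ORM is available here
    print(SomeModel.objects.count())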
Answer 14
If you are using a virtual environment, try this:
python manage.py shell
To use those commands you must be inside the virtual environment. For this, use:
workon vir_env_name
For example:
dc@dc-comp-4:~/mysite$ workon jango
(jango)dc@dc-comp-4:~/mysite$ python manage.py shell
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
Note: here mysite is my website name and jango is my virtual environment name.
I came here with the same question as the OP, and found my favourite answer precisely in the mistake within the question, which also works in Python 3:
The Django shell is the good way to execute a Python module within the Django environment, but importing modules and executing functions manually is not always easy and can be tiresome, especially without auto-completion. To solve this, I created a small shell script, “runscript.sh”, that lets you take full advantage of the auto-completion and the log history of the Linux console.
NB: copy runscript.sh to the project root and make it executable (chmod +x).
For example:
I want to run the Python function show(a, b, c) in the module do_somethings.py in myapp/do_folder/
I regularly “jsonify” np.arrays. Try using the “.tolist()” method on the arrays first, like this:
import numpy as np
import codecs, json
a = np.arange(10).reshape(2,5) # a 2 by 5 array
b = a.tolist() # nested lists with same data, indices
file_path = "/path.json" ## your path variable
json.dump(b, codecs.open(file_path, 'w', encoding='utf-8'), separators=(',', ':'), sort_keys=True, indent=4) ### this saves the array in .json format
default should be a function that gets called for objects that can’t otherwise be serialized. … or raise a TypeError
In the default function check if the object is from the module numpy, if so either use ndarray.tolist for a ndarray or use .item for any other numpy specific type.
import json
import numpy as np
def default(obj):
if type(obj).__module__ == np.__name__:
if isinstance(obj, np.ndarray):
return obj.tolist()
else:
return obj.item()
raise TypeError('Unknown type:', type(obj))
dumped = json.dumps(data, default=default)
This is not supported by default, but you can make it work quite easily! There are several things you’ll want to encode if you want the exact same data back:
The data itself, which you can get with obj.tolist() as @travelingbones mentioned. Sometimes this may be good enough.
The data type. I feel this is important in quite some cases.
The dimension (not necessarily 2D), which could be derived from the above if you assume the input is indeed always a ‘rectangular’ grid.
The memory order (row- or column-major). This doesn’t often matter, but sometimes it does (e.g. performance), so why not save everything?
Furthermore, your numpy array could be part of your data structure, e.g. you have a list with some matrices inside. For that you could use a custom encoder which basically does the above.
This should be enough to implement a solution. Or you could use json-tricks which does just this (and supports various other types) (disclaimer: I made it).
pip install json-tricks
Then
from datetime import datetime
from decimal import Decimal
from fractions import Fraction
from numpy import arange
from json_tricks import dumps

data = [
arange(0, 10, 1, dtype=int).reshape((2, 5)),
datetime(year=2017, month=1, day=19, hour=23, minute=00, second=00),
1 + 2j,
Decimal(42),
Fraction(1, 3),
MyTestCls(s='ub', dct={'7': 7}), # see later
set(range(7)),
]
# Encode with metadata to preserve types when decoding
print(dumps(data))
Answer 6
I had a similar problem with a nested dictionary with some numpy.ndarrays in it.
def jsonify(data):
    json_data = dict()
    for key, value in data.items():  # use .iteritems() on Python 2
        if isinstance(value, list):  # for lists
            value = [jsonify(item) if isinstance(item, dict) else item for item in value]
        if isinstance(value, dict):  # for nested dicts
            value = jsonify(value)
        if isinstance(key, int):  # if key is integer: > to string
            key = str(key)
        if type(value).__module__ == 'numpy':  # if value is numpy.*: > to python list
            value = value.tolist()
        json_data[key] = value
    return json_data
It’s worth noting that once I convert my arrays to a list before saving them in a JSON file (in my current deployment, at least), I can keep using them in list form after reading that JSON file back later (as opposed to converting them back to arrays).
It also actually looks nicer (in my opinion) on screen as a list (comma separated) vs. an array (not comma separated) this way.
Using @travelingbones’s .tolist() approach above, I’ve been using it as follows (catching a few errors I’ve found too):
SAVE DICTIONARY
def writeDict(values, name):
writeName = DIR+name+'.json'
with open(writeName, "w") as outfile:
json.dump(values, outfile)
READ DICTIONARY
def readDict(name):
readName = DIR+name+'.json'
try:
with open(readName, "r") as infile:
dictValues = json.load(infile)
return(dictValues)
except IOError as e:
print(e)
return('None')
except ValueError as e:
print(e)
return('None')
Hope this helps!
Answer 9
Here is an implementation that works for me and removes all nans (assuming these are simple objects, list or dict):
from numpy import isnan

def remove_nans(my_obj, val=None):
    if isinstance(my_obj, list):
        for i, item in enumerate(my_obj):
            if isinstance(item, list) or isinstance(item, dict):
                my_obj[i] = remove_nans(my_obj[i], val=val)
            else:
                try:
                    if isnan(item):
                        my_obj[i] = val
                except Exception:
                    pass
    elif isinstance(my_obj, dict):
        for key, item in my_obj.items():  # use .iteritems() on Python 2
            if isinstance(item, list) or isinstance(item, dict):
                my_obj[key] = remove_nans(my_obj[key], val=val)
            else:
                try:
                    if isnan(item):
                        my_obj[key] = val
                except Exception:
                    pass
    return my_obj
This is a different answer, but it might help people who are trying to save data and then read it again.
There is hickle, which is faster than pickle and easier to use.
I tried to save and read it with a pickle dump, but while reading there were a lot of problems; I wasted an hour and still didn’t find a solution, though I was working on my own data to create a chat bot.
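For what it’s worth, a minimal sketch of the hickle round trip (assuming pip install hickle; the data and file name are placeholders):
import numpy as np
import hickle as hkl

data = {'weights': np.arange(10).reshape(2, 5)}

hkl.dump(data, 'data.hkl')       # writes an HDF5-backed file
restored = hkl.load('data.hkl')  # arrays come back as numpy arrays
print(restored['weights'])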
with open("jsondontdoit.json", 'w') as fp:
for key in bests.keys():
if type(bests[key]) == np.ndarray:
bests[key] = bests[key].tolist()
continue
for idx in bests[key]:
if type(bests[key][idx]) == np.ndarray:
bests[key][idx] = bests[key][idx].tolist()
json.dump(bests, fp)
fp.close()
Answer 12
Use NumpyEncoder; it will process the json dump successfully, without throwing “NumPy array is not JSON serializable”.
import numpy as np
import json
from numpyencoder import NumpyEncoder

arr = np.array([0, 239, 479, 717, 952, 1192, 1432, 1667], dtype=np.int64)
json.dumps(arr, cls=NumpyEncoder)
But luckily I found the hint to resolve the error that was being thrown.
Serializing the objects is applicable only for the following conversions; the mapping should be as follows:
object – dict
array – list
string – string
integer – integer
If you scroll up to line number 10, prediction = loaded_model.predict(d), that line of code was generating output of the array datatype; when you try to convert an array to JSON format, it’s not possible.
Finally I found the solution just by converting the obtained output to the type list with the following lines of code:
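A self-contained sketch of that conversion (the array below stands in for the output of loaded_model.predict(d)):
import json
import numpy as np

# stand-in for the ndarray returned by loaded_model.predict(d)
prediction = np.array([[0.1, 0.9]])

# .tolist() turns the ndarray into plain Python lists, which json can serialize
payload = json.dumps({'prediction': prediction.tolist()})
print(payload)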
For the “best of both worlds” you could combine S.Lott’s solution with the xsendfile module: Django generates the path to the file (or the file itself), but the actual file serving is handled by Apache/Lighttpd. Once you’ve set up mod_xsendfile, integrating with your view takes a few lines of code:
from django.utils.encoding import smart_str
response = HttpResponse(mimetype='application/force-download') # mimetype is replaced by content_type for django 1.7
response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name)
response['X-Sendfile'] = smart_str(path_to_file)
# It's usually a good idea to set the 'Content-Length' header too.
# You can also set any other required headers: Cache-Control, etc.
return response
Of course, this will only work if you have control over your server, or your hosting company has mod_xsendfile already set up.
EDIT:
mimetype is replaced by content_type for django 1.7
The request’s GET or POST dictionary will have the "f=somefile.txt" information.
Your view function will simply merge the base path with the “f” value, open the file, create and return a response object. It should be less than 12 lines of code.
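A sketch of such a view (BASE_PATH, the view name, and reading from GET are assumptions for illustration):
import os
from django.http import HttpResponse, Http404

BASE_PATH = '/path/to/downloads'  # assumed base directory for served files

def download(request):
    filename = request.GET.get('f', '')
    full_path = os.path.normpath(os.path.join(BASE_PATH, filename))
    # refuse paths that escape the base directory or don't exist
    if not full_path.startswith(BASE_PATH) or not os.path.isfile(full_path):
        raise Http404
    with open(full_path, 'rb') as fh:
        response = HttpResponse(fh.read(), content_type='application/octet-stream')
    response['Content-Disposition'] = 'attachment; filename=%s' % os.path.basename(full_path)
    return response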
For a very simple but not efficient or scalable solution, you can just use the built-in Django serve view. This is excellent for quick prototypes or one-off work, but as has been mentioned throughout this question, you should use something like Apache or nginx in production.
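A sketch of wiring the built-in serve view into urls.py for such prototyping (the URL prefix and document_root are placeholders; re_path is the modern spelling, older projects would use url()):
from django.urls import re_path
from django.views.static import serve

urlpatterns = [
    # NOT for production: serves anything under document_root
    re_path(r'^download/(?P<path>.*)$', serve, {'document_root': '/path/to/files'}),
]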
S.Lott has the “good”/simple solution, and elo80ka has the “best”/efficient solution. Here is a “better”/middle solution – no server setup, but more efficient for large files than the naive fix:
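A sketch of that middle approach using FileWrapper and StreamingHttpResponse (the path is a placeholder, and the original answer’s exact code may differ):
import os
from wsgiref.util import FileWrapper
from django.http import StreamingHttpResponse

def stream_file(request):
    path = '/path/to/big_file.bin'  # placeholder; resolve and authorize per request
    wrapper = FileWrapper(open(path, 'rb'))  # yields the file in chunks
    response = StreamingHttpResponse(wrapper, content_type='application/octet-stream')
    response['Content-Length'] = os.path.getsize(path)
    response['Content-Disposition'] = 'attachment; filename=%s' % os.path.basename(path)
    return response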
Basically, Django still handles serving the file but does not load the whole thing into memory at once. This allows your server to (slowly) serve a big file without ramping up the memory usage.
Again, S.Lott’s X-SendFile is still better for larger files. But if you can’t or don’t want to bother with that, then this middle solution will gain you better efficiency without the hassle.
Tried @Rocketmonkeys’ solution, but downloaded files were being stored as *.bin and given random names. That’s not fine of course. Adding another line from @elo80ka’s answer solved the problem.
Here is the code I’m using now:
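A sketch of that combination (X-Sendfile from @elo80ka’s answer plus the explicit filename from @Rocketmonkeys’; the path and file name are placeholders):
from django.http import HttpResponse
from django.utils.encoding import smart_str

def download(request):
    path_to_file = '/srv/protected/report.pdf'  # placeholder private path
    file_name = 'report.pdf'                    # placeholder download name
    response = HttpResponse(content_type='application/force-download')
    response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name)
    response['X-Sendfile'] = smart_str(path_to_file)  # Apache/mod_xsendfile does the actual serving
    return response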
You can now store files in a private directory (not inside /media nor /public_html) and expose them via django to certain users or under certain circumstances.
Hope it helps.
Thanks to @elo80ka, @S.Lott and @Rocketmonkeys for the answers, got the perfect solution combining all of them =)
Just mentioning the FileResponse object available in Django 1.10
Edit: Just ran into my own answer while searching for an easy way to stream files via Django, so here is a more complete example (to future me). It assumes that the FileField name is imported_file
views.py
from django.views.generic.detail import DetailView
from django.http import FileResponse
class BaseFileDownloadView(DetailView):
def get(self, request, *args, **kwargs):
filename=self.kwargs.get('filename', None)
if filename is None:
raise ValueError("Found empty filename")
some_file = self.model.objects.get(imported_file=filename)
response = FileResponse(some_file.imported_file, content_type="text/csv")
# https://docs.djangoproject.com/en/1.11/howto/outputting-csv/#streaming-large-csv-files
response['Content-Disposition'] = 'attachment; filename="%s"'%filename
return response
class SomeFileDownloadView(BaseFileDownloadView):
model = SomeModel
It was mentioned above that the mod_xsendfile method does not allow for non-ASCII characters in filenames.
For this reason, I have a patch available for mod_xsendfile that will allow any file to be sent, as long as the name is url encoded, and the additional header:
You should use the sendfile APIs provided by popular servers like Apache or Nginx in production. For many years I was using the sendfile API of these servers for protecting files. Then I created a simple middleware-based Django app for this purpose, suitable for both development and production. You can access the source code here.
UPDATE: in the new version the python provider uses Django’s FileResponse if available, and support has been added for many server implementations, from lighttpd and Caddy to Hiawatha.
Usage
pip install django-fileprovider
Add the fileprovider app to INSTALLED_APPS.
Add fileprovider.middleware.FileProviderMiddleware to the MIDDLEWARE_CLASSES setting.
Set the FILEPROVIDER_NAME setting to nginx or apache in production; by default it is python, for development purposes.
In your class-based or function views, set the X-File response header to the absolute path of the file. For example:
def hello(request):
    # code to check or protect the file from unauthorized access
    response = HttpResponse()
    response['X-File'] = '/absolute/path/to/file'
    return response
django-fileprovider is implemented in a way that your code will need only minimal modification.
Nginx configuration
To protect the files from direct access, you can set the Nginx configuration accordingly.
Django recommends that you use another server to serve static media (another server running on the same machine is fine). They recommend the use of servers such as lighttpd.
This is very simple to set up. However, if ‘somefile.txt’ is generated on request (content is dynamic), then you may want Django to serve it.
I have faced the same problem more than once, and so implemented django-filelibrary using the xsendfile module and auth view decorators. Feel free to use it as inspiration for your own solution.
This module provides a simple way to serve files for download in Django REST framework using the Apache module XSendFile. It also has an additional feature of serving downloads only to users belonging to a particular group.
For every field that has choices set, the object will have a get_FOO_display() method, where FOO is the name of the field. This method returns the “human-readable” value of the field.
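For context, a minimal sketch of such a model (the Person model and gender field are assumed from the view snippet below):
from django.db import models

class Person(models.Model):
    GENDER_CHOICES = [('M', 'Male'), ('F', 'Female')]
    gender = models.CharField(max_length=1, choices=GENDER_CHOICES)
    to_be_listed = models.BooleanField(default=True)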
In Views
person = Person.objects.filter(to_be_listed=True).first()  # get_FOO_display() is an instance method, so take one object
context['gender'] = person.get_gender_display()
I am working on a Django project with virtualenv and connecting it to a local postgres database. When I run the project it says,
ImportError: No module named psycopg2.extensions
Then I used this command to install it:
pip install psycopg2
Then during the installation it gives the following error:
Downloading/unpacking psycopg2==2.4.4
Downloading psycopg2-2.4.4.tar.gz (648kB): 648kB downloaded
Running setup.py (path:/home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2/setup.py) egg_info for package psycopg2
Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2
Storing debug log for failure in /home/muhammadtaqi/.pip/pip.log
They changed the packaging for psycopg2. Installing the binary version (pip install psycopg2-binary) fixed this issue for me. The above answers still hold up if you want to compile the binary yourself.
You must install postgresql-server-dev-X.Y, where X.Y is your server’s version; it will install libpq-dev and the other server-side modules needed for development.
In my case it was
apt-get install postgresql-server-dev-9.5
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libmysqlclient18 mysql-common
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  libpq-dev
Suggested packages:
  postgresql-doc-10
The following NEW packages will be installed:
  libpq-dev postgresql-server-dev-9.5
I was using a virtual environment on Ubuntu 18.04, and since I only wanted to install it as a client, I only had to do:
sudo apt install libpq-dev
pip install psycopg2
And it installed without problems. Of course, you can use the binary package as other answers said, but I preferred this solution since plain psycopg2 was what was listed in my requirements.txt file.
class TankJournal(models.Model):
user = models.ForeignKey(User)
tank = models.ForeignKey(TankProfile)
ts = models.IntegerField(max_length=15)
title = models.CharField(max_length=50)
body = models.TextField()
I also have a model form for the above model as follows:
class JournalForm(ModelForm):
tank = forms.IntegerField(widget=forms.HiddenInput())
class Meta:
model = TankJournal
exclude = ('user','ts')
I want to know how to set the default value for that tank hidden field. Here is my function to show/save the form so far:
def addJournal(request, id=0):
if not request.user.is_authenticated():
return HttpResponseRedirect('/')
# checking if they own the tank
from django.contrib.auth.models import User
user = User.objects.get(pk=request.session['id'])
if request.method == 'POST':
form = JournalForm(request.POST)
if form.is_valid():
obj = form.save(commit=False)
# setting the user and ts
from time import time
obj.ts = int(time())
obj.user = user
obj.tank = TankProfile.objects.get(pk=form.cleaned_data['tank_id'])
# saving the test
obj.save()
else:
form = JournalForm()
try:
tank = TankProfile.objects.get(user=user, id=id)
except TankProfile.DoesNotExist:
return HttpResponseRedirect('/error/')
As explained in Django docs, initial is not default.
The initial value of a field is intended to be displayed in an HTML <input>. But if the user deletes this value and finally sends back a blank value for this field, the initial value is lost. So you do not obtain what is expected from a default behaviour.
The default behaviour is: the value that the validation process will take if the data argument does not contain any value for the field.
To implement that, a straightforward way is to combine initial and clean_<field>():
class JournalForm(ModelForm):
tank = forms.IntegerField(widget=forms.HiddenInput(), initial=123)
(...)
def clean_tank(self):
if not self['tank'].html_name in self.data:
return self.fields['tank'].initial
return self.cleaned_data['tank']
render() is a brand spanking new shortcut for render_to_response in 1.3 that will automatically use RequestContext; I will most definitely be using it from now on.
2020 EDIT: It should be noted that render_to_response() was removed in Django 3.0
render_to_response is your standard render function used in the tutorials and such. To use RequestContext you’d have to specify context_instance=RequestContext(request)
direct_to_template is a generic view that I use in my views (as opposed to in my urls) because like the new render() function, it automatically uses RequestContext and all its context_processors.
render() is a shortcut for render_to_response() that automatically supplies context_instance=Request….
It’s available in the Django development version (1.2.1), but many have created their own shortcuts, such as this one, this one, or the one that threw me initially, Nathan’s basic.tools.shortcuts.py.
Answer 2
render is

def render(request, *args, **kwargs):
    """ Simple wrapper for render_to_response. """
    kwargs['context_instance'] = RequestContext(request)
    return render_to_response(*args, **kwargs)
render() is the same as a call to render_to_response() with a context_instance argument that forces the use of a RequestContext.
direct_to_template is something different. It’s a generic view that uses a data dictionary to render the HTML without the need for a views.py; you use it in urls.py. Docs here
What does the third parameter context_instance actually do? Being a RequestContext, it sets up some basic context which is then added to user_context, so the template gets this extended context. Which variables are added is given by TEMPLATE_CONTEXT_PROCESSORS in settings.py. For instance, django.contrib.auth.context_processors.auth adds the variables user and perms, which are then accessible in the template.
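To make the comparison concrete, a small sketch (template name and context are placeholders; the old form is shown commented out, since render_to_response was removed in Django 3.0 as noted above):
from django.shortcuts import render

def my_view(request):
    context = {'greeting': 'hello'}
    # render() applies RequestContext (and its context processors) automatically
    return render(request, 'my_template.html', context)

# Older equivalent, for the Django versions this thread discusses:
# from django.shortcuts import render_to_response
# from django.template import RequestContext
# return render_to_response('my_template.html', context,
#                           context_instance=RequestContext(request))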
You can’t by default. The dot is the separator / trigger for attribute lookup / key lookup / slice.
Dots have a special meaning in template rendering. A dot in a variable name signifies a lookup. Specifically, when the template system encounters a dot in a variable name, it tries the following lookups, in this order:
Dictionary lookup. Example: foo["bar"]
Attribute lookup. Example: foo.bar
List-index lookup. Example: foo[bar]
But you can make a filter which lets you pass in an argument:
# code for custom template filter (lives in a templatetags module)
import ast
from django import template

register = template.Library()

@register.filter(name='lookup')
def lookup(value, arg):
    value_dict = ast.literal_eval(value)
    return value_dict.get(arg)
<!--template tag (in the template)-->
{{ mydict|lookup:item.name }}
Answer 6
Environment: Django 2.2
Sample code:
from django.template.defaulttags import register
@register.filter(name='lookup')
def lookup(value, arg):
    return value.get(arg)