
Logging in Python

2016-07-10

Logging

import logging

Quick Start

After importing logging, you can call logging.warning() or logging.error() directly. The default level of the root logger is WARNING, so warning and error messages are printed. info() and debug() messages have lower levels than WARNING and thus are not printed.

If logging is not configured otherwise, messages are printed to the console (standard error).

#!/usr/local/bin/python
# -*- coding:utf-8 -*-
import logging
logging.warning('Watch out!')  # print message to console
logging.info('I told you so')  # will not print anything

Log to file

It is more common to write log messages to a file. Use logging.basicConfig() to specify parameters such as the file name, the level, and the file mode.

Level     Value
CRITICAL  50
ERROR     40
WARNING   30
INFO      20
DEBUG     10

If you set the level to INFO, DEBUG messages will not be printed. The functions debug(), info(), warning(), error() and critical() correspond to these levels.
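
As a minimal sketch of this threshold behaviour (run in a fresh interpreter; output goes to the console), setting the level to INFO discards debug() calls while letting info() and above through:

import logging
# INFO (20) is the threshold: DEBUG (10) records are discarded,
# INFO and higher levels pass through.
logging.basicConfig(level=logging.INFO)
logging.debug('suppressed, 10 < 20')
logging.info('printed, 20 >= 20')
logging.warning('printed, 30 >= 20')

INFO:root:printed, 20 >= 20
WARNING:root:printed, 30 >= 20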

Note: put the code below into a Python file such as test_log.py and run it. If you run it in an interactive session (for example an IDE console) where logging has already been configured, basicConfig() has no effect and no log file will be generated.

import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG,filemode='w')
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')

cat example.log
DEBUG:root:This message should go to the log file
INFO:root:So should this
WARNING:root:And this, too

Change the log format

Use the format parameter of basicConfig() to control how each log message is rendered.

import logging
logging.basicConfig(format='%(levelname)s:%(message)s',level=logging.DEBUG)
logging.debug('This message should appear on the console')
logging.info('So should this')
logging.warning('And this, too')

DEBUG:This message should appear on the console
INFO:So should this
WARNING:And this, too

Log time

Use the datefmt parameter to change the format of the timestamp (%(asctime)s).

import logging
logging.basicConfig(format='%(asctime)s %(message)s',datefmt='%m/%d/%Y %I:%M:%S %p')
logging.warning('is when this event was logged.')

07/16/2016 12:10:35 AM is when this event was logged.

More control over logging

So far we have only used the default configuration through basicConfig(). The logging module offers much finer control, for example sending log records to both the console and a log file at the same time.

A basic understanding of a few concepts is useful:

  • Logger: the object that application code calls to emit log records
  • Handler: sends log records to a destination such as a file or a stream
  • Filter: drops records that do not match some condition (see the sketch at the end of this section)
  • Formatter: controls the layout of the final output

First, create a logger, then add handlers to it so that records are written to different destinations, such as the file log.txt and the console.

import logging
# create logger with name
# if not specified, it will be root
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)

# create a handler, write to log.txt
# logging.FileHandler(filename, mode='a', encoding=None, delay=False)
# A handler class which writes formatted logging records to disk files.
fh = logging.FileHandler('log.txt')
fh.setLevel(logging.DEBUG)

# create another handler for the console (StreamHandler writes to sys.stderr by default)
# A handler class which writes logging records to a stream
sh = logging.StreamHandler()
sh.setLevel(logging.DEBUG)

# set formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
sh.setFormatter(formatter)

# add handler to logger
logger.addHandler(fh)
logger.addHandler(sh)

# log it
logger.debug('Debug')
logger.info('Info')

2016-07-18 21:43:14,648 - my_logger - DEBUG - Debug
2016-07-18 21:43:14,650 - my_logger - INFO - Info
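
The Filter concept is not used in the example above. As a small illustrative sketch (the filter class DropHeartbeat, the logger name filter_demo and the messages are made up), a filter attached to a handler can drop individual records whose message matches some condition:

import logging

class DropHeartbeat(logging.Filter):
    # return False to drop the record, True to keep it
    def filter(self, record):
        return 'heartbeat' not in record.getMessage()

logger = logging.getLogger('filter_demo')
logger.setLevel(logging.DEBUG)

sh = logging.StreamHandler()
sh.addFilter(DropHeartbeat())
logger.addHandler(sh)

logger.info('heartbeat ok')        # dropped by the filter
logger.warning('disk almost full') # passes the filter

Only the second message reaches the console; the first one is rejected by the filter before the handler emits it.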
