# characters for the -r (report summary) option:
- f - failed
- E - error
- s - skipped
- x - xfailed
- X - xpassed
- p - passed
- P - passed with output
# for groups
- a - all except pP
- A - all
- N - none, this can be used to display nothing (since fE is the default)
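Any subset of the characters above can be combined, for example:
pytest -ra   # short summary for everything except passed tests
pytest -rfs  # short summary for failed and skipped tests only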
pytest -x --pdb # drop to PDB on first failure, then end test session
pytest --pdb --maxfail=3 # drop to PDB for first three failures
Drop into the PDB debugger at the start of a test
pytest --trace
# This will invoke the Python debugger at the start of every test.
Setting breakpoints
import pdb
pdb.set_trace()
The built-in breakpoint() function
# When breakpoint() is called and PYTHONBREAKPOINT is set to the default value, pytest will use the
# custom internal PDB trace UI instead of the system default Pdb.
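A minimal sketch of breakpoint() inside a test (compute_total() is a hypothetical stand-in for real code under test):
def test_checkout():
    total = compute_total()  # hypothetical helper, not a real API
    breakpoint()             # with PYTHONBREAKPOINT at its default, pytest opens its own PDB UI here
    assert total > 0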
Profiling test execution durations
# To get a list of the slowest 10 test durations:
pytest --durations=10
faulthandler
# Timeout: faulthandler_timeout=X dumps the tracebacks of all threads if a test takes longer than X seconds
# (noted as not working here; needs further investigation)
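For reference, the option is set in the ini file and takes seconds (a sketch; per the note above it did not work in this environment):
[pytest]
# dump tracebacks of all threads if a test runs longer than 5 seconds
faulthandler_timeout = 5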
### Creating JUnitXML format files
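To create an XML report readable by Jenkins and other CI servers (report.xml is an arbitrary output path):
pytest --junitxml=report.xml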
### Calling pytest from Python code
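pytest.main() runs an in-process test session and returns the exit code (a sketch; "tests" is an assumed directory name):
import pytest

retcode = pytest.main(["-x", "tests"])  # equivalent to running "pytest -x tests" on the command line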
# pytest uses the plain Python assert statement for verification:
assert a % 2 == 0, "value was odd, should be even"
Assertions about expected exceptions (p26)
import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0

# and if you need to have access to the actual exception info you may use:
def test_recursion_depth():
    with pytest.raises(RuntimeError) as excinfo:
        def f():
            f()
        f()
    assert "maximum recursion" in str(excinfo.value)

def myfunc():
    raise ValueError("Exception 123 raised")

def test_match():
    with pytest.raises(ValueError, match=r".* 123 .*"):
        myfunc()
Fixture management scales from simple unit testing to complex functional testing, allowing you to parametrize fixtures and tests according to configuration and component options, or to re-use fixtures across function, class, module, or whole-test-session scopes.
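A minimal sketch of a parametrized, module-scoped fixture illustrating the sentence above (the backend names are made up):
import pytest

@pytest.fixture(scope="module", params=["sqlite", "postgres"])
def db_backend(request):
    # set up once per module for each param; every test using the fixture runs once per backend
    yield request.param

def test_backend_selected(db_backend):
    assert db_backend in ("sqlite", "postgres")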
Built-in fixtures
capfd  Capture, as text, output to file descriptors 1 and 2.
capfdbinary  Capture, as bytes, output to file descriptors 1 and 2.
caplog  Control logging and access log entries.
capsys  Capture, as text, output to sys.stdout and sys.stderr.
capsysbinary  Capture, as bytes, output to sys.stdout and sys.stderr.
cache  Store and retrieve values across pytest runs.
doctest_namespace  Provide a dict injected into the doctests namespace.
monkeypatch  Temporarily modify classes, functions, dictionaries, os.environ, and other objects.
pytestconfig  Access to configuration values, pluginmanager and plugin hooks.
record_property  Add extra properties to the test.
record_testsuite_property  Add extra properties to the test suite.
recwarn  Record warnings emitted by test functions.
request  Provide information on the executing test function.
testdir  Provide a temporary test directory to aid in running, and testing, pytest plugins.
tmp_path  Provide a pathlib.Path object to a temporary directory which is unique to each test function.
tmp_path_factory  Make session-scoped temporary directories and return pathlib.Path objects.
tmpdir  Provide a py.path.local object to a temporary directory which is unique to each test function; replaced by tmp_path.
tmpdir_factory  Make session-scoped temporary directories and return py.path.local objects; replaced by tmp_path_factory.
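A short sketch exercising a few of the fixtures listed above:
def test_builtin_fixtures(capsys, monkeypatch, tmp_path):
    monkeypatch.setenv("APP_MODE", "test")       # undone automatically when the test ends
    print("hello")
    assert capsys.readouterr().out == "hello\n"  # captured stdout
    f = tmp_path / "hello.txt"                   # per-test temporary directory as a pathlib.Path
    f.write_text("content")
    assert f.read_text() == "content"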
Selecting tests to run
# a specific test within a module
pytest test_mod.py::test_func
# a test method within a class
pytest test_mod.py::TestClass::test_method
# by marker
pytest -m slow
# from packages
pytest --pyargs pkg.testing
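For -m slow to select anything, tests have to carry the marker, and registering it in the ini file avoids warnings; a sketch with a made-up test name:
# content of a test module:
import pytest

@pytest.mark.slow
def test_full_resync():
    ...

# content of pytest.ini (registers the marker):
[pytest]
markers =
    slow: marks tests as slow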
Modifying traceback printing
pytest --showlocals # show local variables in tracebacks
pytest -l # show local variables (shortcut)
pytest --tb=auto # (default) 'long' tracebacks for the first and last entry, but 'short' style for the other entries
pytest --tb=long # exhaustive, informative traceback formatting
pytest --tb=short # shorter traceback format
pytest --tb=line # only one line per failure
pytest --tb=native # Python standard library formatting
pytest --tb=no # no traceback at all
# Factories as fixtures
@pytest.fixture
def make_customer_record():
    def _make_customer_record(name):
        return {"name": name, "orders": []}
    return _make_customer_record

def test_customer_records(make_customer_record):
    customer_1 = make_customer_record("Lisa")
    customer_2 = make_customer_record("Mike")
    customer_3 = make_customer_record("Meredith")

# If the data created by the factory needs managing (cleanup), the fixture can take care of that:
@pytest.fixture
def make_customer_record():
    created_records = []
    def _make_customer_record(name):
        record = models.Customer(name=name, orders=[])
        created_records.append(record)
        return record
    yield _make_customer_record
    for record in created_records:
        record.destroy()

def test_customer_records(make_customer_record):
    customer_1 = make_customer_record("Lisa")
    customer_2 = make_customer_record("Mike")
    customer_3 = make_customer_record("Meredith")
Temporary working directory
# content of conftest.py
import os
import shutil
import tempfile
import pytest

@pytest.fixture
def cleandir():
    old_cwd = os.getcwd()
    newpath = tempfile.mkdtemp()
    os.chdir(newpath)
    yield
    os.chdir(old_cwd)
    shutil.rmtree(newpath)

# content of test_tmpdir.py
import os

def test_create_file(tmpdir):
    p = tmpdir.mkdir("sub").join("hello.txt")
    p.write("content")
    assert p.read() == "content"
    assert len(tmpdir.listdir()) == 1
    assert 0
An xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it's an xpass and will be reported in the test summary.
@pytest.mark.skip(reason="no way of currently testing this")
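For contrast with the skip marker above, a sketch of xfail (do_feature() is a hypothetical, not-yet-implemented function):
@pytest.mark.xfail(reason="feature not implemented yet")
def test_feature():
    assert do_feature() == "done"  # do_feature() is hypothetical; the test is expected to fail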
# in conftest.py: exclude paths from test collection
collect_ignore = ["setup.py"]
collect_ignore_glob = ["*_py2.py"]
pytest --collect-only # only collect test cases, without executing them
Plugins
- pytest-pep8: a --pep8 option to enable PEP8 compliance checking.
- pytest-flakes: check source code with pyflakes.
- pytest-timeout: to timeout tests based on function marks or global definitions.
- pytest-cov: coverage reporting, compatible with distributed testing.
# If you want to find out which plugins are active in your environment (and how it is configured), you can type:
pytest --trace-config
# You can prevent plugins from loading or unregister them (disable a plugin):
pytest -p no:NAME
[pytest]
addopts = -p no:NAME
Logging (p131)
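The notes stop at this heading; a minimal sketch of the caplog fixture listed among the built-in fixtures above ("myapp" is an arbitrary logger name):
import logging

def test_warning_is_logged(caplog):
    caplog.set_level(logging.WARNING)
    logging.getLogger("myapp").warning("disk almost full")
    assert "disk almost full" in caplog.text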
Test modules with the same name: if you need to have test modules with the same name, add __init__.py files to your tests folder and subfolders, turning them into packages:
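For example (layout adapted from the pytest good-practices docs; module names are illustrative):
tests/
    __init__.py
    foo/
        __init__.py
        test_view.py
    bar/
        __init__.py
        test_view.py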
# Find out which test each running pytest process is executing, via the PYTEST_CURRENT_TEST environment variable:
import psutil

for pid in psutil.pids():
    environ = psutil.Process(pid).environ()
    if "PYTEST_CURRENT_TEST" in environ:
        print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')