2.7 Consulting the Documentation

Published: 2023-05-25 17:28:37 | Author: AncilunKiang

2.7.1 Finding All Functions and Classes in a Module

import torch

We can call the dir function to list all the functions and classes in a module.

Names that begin and end with "__" (a double underscore) are special Python objects, and names that begin with a single "_" (underscore) are internal functions; both kinds can usually be ignored (a short filtering sketch follows the output below).

dir(torch.distributions)
['AbsTransform',
 'AffineTransform',
 'Bernoulli',
 'Beta',
 'Binomial',
 'CatTransform',
 'Categorical',
 'Cauchy',
 'Chi2',
 'ComposeTransform',
 'ContinuousBernoulli',
 'CorrCholeskyTransform',
 'CumulativeDistributionTransform',
 'Dirichlet',
 'Distribution',
 'ExpTransform',
 'Exponential',
 'ExponentialFamily',
 'FisherSnedecor',
 'Gamma',
 'Geometric',
 'Gumbel',
 'HalfCauchy',
 'HalfNormal',
 'Independent',
 'IndependentTransform',
 'Kumaraswamy',
 'LKJCholesky',
 'Laplace',
 'LogNormal',
 'LogisticNormal',
 'LowRankMultivariateNormal',
 'LowerCholeskyTransform',
 'MixtureSameFamily',
 'Multinomial',
 'MultivariateNormal',
 'NegativeBinomial',
 'Normal',
 'OneHotCategorical',
 'OneHotCategoricalStraightThrough',
 'Pareto',
 'Poisson',
 'PowerTransform',
 'RelaxedBernoulli',
 'RelaxedOneHotCategorical',
 'ReshapeTransform',
 'SigmoidTransform',
 'SoftmaxTransform',
 'SoftplusTransform',
 'StackTransform',
 'StickBreakingTransform',
 'StudentT',
 'TanhTransform',
 'Transform',
 'TransformedDistribution',
 'Uniform',
 'VonMises',
 'Weibull',
 'Wishart',
 '__all__',
 '__builtins__',
 '__cached__',
 '__doc__',
 '__file__',
 '__loader__',
 '__name__',
 '__package__',
 '__path__',
 '__spec__',
 'bernoulli',
 'beta',
 'biject_to',
 'binomial',
 'categorical',
 'cauchy',
 'chi2',
 'constraint_registry',
 'constraints',
 'continuous_bernoulli',
 'dirichlet',
 'distribution',
 'exp_family',
 'exponential',
 'fishersnedecor',
 'gamma',
 'geometric',
 'gumbel',
 'half_cauchy',
 'half_normal',
 'identity_transform',
 'independent',
 'kl',
 'kl_divergence',
 'kumaraswamy',
 'laplace',
 'lkj_cholesky',
 'log_normal',
 'logistic_normal',
 'lowrank_multivariate_normal',
 'mixture_same_family',
 'multinomial',
 'multivariate_normal',
 'negative_binomial',
 'normal',
 'one_hot_categorical',
 'pareto',
 'poisson',
 'register_kl',
 'relaxed_bernoulli',
 'relaxed_categorical',
 'studentT',
 'transform_to',
 'transformed_distribution',
 'transforms',
 'uniform',
 'utils',
 'von_mises',
 'weibull',
 'wishart']
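
As a quick way to skip over the special and internal names mentioned above, we can keep only the entries that do not start with an underscore. The snippet below is a minimal sketch, not part of the original notebook:

import torch

# Keep only public names: drop "__special__" objects and "_internal" helpers.
public_names = [name for name in dir(torch.distributions)
                if not name.startswith('_')]
public_names[:5]
['AbsTransform', 'AffineTransform', 'Bernoulli', 'Beta', 'Binomial']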

2.7.2 Finding the Usage of Specific Functions and Classes

We can call the help function to view more detailed documentation for a given function or class.

help(torch.ones)
Help on built-in function ones in module torch:

ones(...)
    ones(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
    
    Returns a tensor filled with the scalar value `1`, with the shape defined
    by the variable argument :attr:`size`.
    
    Args:
        size (int...): a sequence of integers defining the shape of the output tensor.
            Can be a variable number of arguments or a collection like a list or tuple.
    
    Keyword arguments:
        out (Tensor, optional): the output tensor.
        dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
            Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
        layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
            Default: ``torch.strided``.
        device (:class:`torch.device`, optional): the desired device of returned tensor.
            Default: if ``None``, uses the current device for the default tensor type
            (see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
            for CPU tensor types and the current CUDA device for CUDA tensor types.
        requires_grad (bool, optional): If autograd should record operations on the
            returned tensor. Default: ``False``.
    
    Example::
    
        >>> torch.ones(2, 3)
        tensor([[ 1.,  1.,  1.],
                [ 1.,  1.,  1.]])
    
        >>> torch.ones(5)
        tensor([ 1.,  1.,  1.,  1.,  1.])
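
The keyword arguments listed in the docstring above can be passed directly. The snippet below is a small illustrative sketch (not from the original notebook) that sets dtype and requires_grad:

# Use keyword arguments documented in the signature of torch.ones.
x = torch.ones(2, 3, dtype=torch.float64, requires_grad=True)
x.dtype, x.requires_grad
(torch.float64, True)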

Exercises

(1) Look up the documentation for any function or class in the deep learning framework. Try to find this documentation on the framework's official website.

Look up the documentation for torch.distributions.multinomial.Multinomial; its official documentation is available on the PyTorch website.

help(torch.distributions.multinomial.Multinomial)
Help on class Multinomial in module torch.distributions.multinomial:

class Multinomial(torch.distributions.distribution.Distribution)
 |  Multinomial(total_count=1, probs=None, logits=None, validate_args=None)
 |  
 |  Creates a Multinomial distribution parameterized by :attr:`total_count` and
 |  either :attr:`probs` or :attr:`logits` (but not both). The innermost dimension of
 |  :attr:`probs` indexes over categories. All other dimensions index over batches.
 |  
 |  Note that :attr:`total_count` need not be specified if only :meth:`log_prob` is
 |  called (see example below)
 |  
 |  .. note:: The `probs` argument must be non-negative, finite and have a non-zero sum,
 |            and it will be normalized to sum to 1 along the last dimension. :attr:`probs`
 |            will return this normalized value.
 |            The `logits` argument will be interpreted as unnormalized log probabilities
 |            and can therefore be any real number. It will likewise be normalized so that
 |            the resulting probabilities sum to 1 along the last dimension. :attr:`logits`
 |            will return this normalized value.
 |  
 |  -   :meth:`sample` requires a single shared `total_count` for all
 |      parameters and samples.
 |  -   :meth:`log_prob` allows different `total_count` for each parameter and
 |      sample.
 |  
 |  Example::
 |  
 |      >>> m = Multinomial(100, torch.tensor([ 1., 1., 1., 1.]))
 |      >>> x = m.sample()  # equal probability of 0, 1, 2, 3
 |      tensor([ 21.,  24.,  30.,  25.])
 |  
 |      >>> Multinomial(probs=torch.tensor([1., 1., 1., 1.])).log_prob(x)
 |      tensor([-4.1338])
 |  
 |  Args:
 |      total_count (int): number of trials
 |      probs (Tensor): event probabilities
 |      logits (Tensor): event log probabilities (unnormalized)
 |  
 |  Method resolution order:
 |      Multinomial
 |      torch.distributions.distribution.Distribution
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, total_count=1, probs=None, logits=None, validate_args=None)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |  
 |  entropy(self)
 |      Returns entropy of distribution, batched over batch_shape.
 |      
 |      Returns:
 |          Tensor of shape batch_shape.
 |  
 |  expand(self, batch_shape, _instance=None)
 |      Returns a new distribution instance (or populates an existing instance
 |      provided by a derived class) with batch dimensions expanded to
 |      `batch_shape`. This method calls :class:`~torch.Tensor.expand` on
 |      the distribution's parameters. As such, this does not allocate new
 |      memory for the expanded distribution instance. Additionally,
 |      this does not repeat any args checking or parameter broadcasting in
 |      `__init__.py`, when an instance is first created.
 |      
 |      Args:
 |          batch_shape (torch.Size): the desired expanded size.
 |          _instance: new instance provided by subclasses that
 |              need to override `.expand`.
 |      
 |      Returns:
 |          New distribution instance with batch dimensions expanded to
 |          `batch_size`.
 |  
 |  log_prob(self, value)
 |      Returns the log of the probability density/mass function evaluated at
 |      `value`.
 |      
 |      Args:
 |          value (Tensor):
 |  
 |  sample(self, sample_shape=torch.Size([]))
 |      Generates a sample_shape shaped sample or sample_shape shaped batch of
 |      samples if the distribution parameters are batched.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties defined here:
 |  
 |  logits
 |  
 |  mean
 |      Returns the mean of the distribution.
 |  
 |  param_shape
 |  
 |  probs
 |  
 |  support
 |      Returns a :class:`~torch.distributions.constraints.Constraint` object
 |      representing this distribution's support.
 |  
 |  variance
 |      Returns the variance of the distribution.
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __annotations__ = {'total_count': <class 'int'>}
 |  
 |  arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs'...
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from torch.distributions.distribution.Distribution:
 |  
 |  __repr__(self)
 |      Return repr(self).
 |  
 |  cdf(self, value)
 |      Returns the cumulative density/mass function evaluated at
 |      `value`.
 |      
 |      Args:
 |          value (Tensor):
 |  
 |  enumerate_support(self, expand=True)
 |      Returns tensor containing all values supported by a discrete
 |      distribution. The result will enumerate over dimension 0, so the shape
 |      of the result will be `(cardinality,) + batch_shape + event_shape`
 |      (where `event_shape = ()` for univariate distributions).
 |      
 |      Note that this enumerates over all batched tensors in lock-step
 |      `[[0, 0], [1, 1], ...]`. With `expand=False`, enumeration happens
 |      along dim 0, but with the remaining batch dimensions being
 |      singleton dimensions, `[[0], [1], ..`.
 |      
 |      To iterate over the full Cartesian product use
 |      `itertools.product(m.enumerate_support())`.
 |      
 |      Args:
 |          expand (bool): whether to expand the support over the
 |              batch dims to match the distribution's `batch_shape`.
 |      
 |      Returns:
 |          Tensor iterating over dimension 0.
 |  
 |  icdf(self, value)
 |      Returns the inverse cumulative density/mass function evaluated at
 |      `value`.
 |      
 |      Args:
 |          value (Tensor):
 |  
 |  perplexity(self)
 |      Returns perplexity of distribution, batched over batch_shape.
 |      
 |      Returns:
 |          Tensor of shape batch_shape.
 |  
 |  rsample(self, sample_shape=torch.Size([]))
 |      Generates a sample_shape shaped reparameterized sample or sample_shape
 |      shaped batch of reparameterized samples if the distribution parameters
 |      are batched.
 |  
 |  sample_n(self, n)
 |      Generates n samples or n batches of samples if the distribution
 |      parameters are batched.
 |  
 |  ----------------------------------------------------------------------
 |  Static methods inherited from torch.distributions.distribution.Distribution:
 |  
 |  set_default_validate_args(value)
 |      Sets whether validation is enabled or disabled.
 |      
 |      The default behavior mimics Python's ``assert`` statement: validation
 |      is on by default, but is disabled if Python is run in optimized mode
 |      (via ``python -O``). Validation may be expensive, so you may want to
 |      disable it once a model is working.
 |      
 |      Args:
 |          value (bool): Whether to enable validation.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from torch.distributions.distribution.Distribution:
 |  
 |  batch_shape
 |      Returns the shape over which parameters are batched.
 |  
 |  event_shape
 |      Returns the shape of a single sample (without batching).
 |  
 |  mode
 |      Returns the mode of the distribution.
 |  
 |  stddev
 |      Returns the standard deviation of the distribution.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from torch.distributions.distribution.Distribution:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes inherited from torch.distributions.distribution.Distribution:
 |  
 |  has_enumerate_support = False
 |  
 |  has_rsample = False
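
To put the documentation above into practice, the following short sketch (based on the docstring example) draws one sample and evaluates its log-probability; the exact counts vary from run to run:

# 100 trials over 4 equally likely categories (probs are normalized automatically).
m = torch.distributions.multinomial.Multinomial(100, torch.ones(4))
counts = m.sample()    # four counts that sum to 100, e.g. tensor([22., 26., 27., 25.])
m.log_prob(counts)     # log-probability of the drawn counts, a scalar tensor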