Commit 17ab76a

Merge pull request #98 from vivekmig/DocStringFormatFix
Fixing docstring formatting
2 parents 6ced217 + 6b15ca6 commit 17ab76a

17 files changed: +188 -138 lines

captum/attr/_core/deep_lift.py

Lines changed: 23 additions & 21 deletions
@@ -144,22 +144,23 @@ def attribute(
                 Default: False

         Returns:
-
-            attributions (tensor or tuple of tensors): Attribution score
-                computed based on DeepLift rescale rule with respect
-                to each input feature. Attributions will always be
-                the same size as the provided inputs, with each value
-                providing the attribution of the corresponding input index.
-                If a single tensor is provided as inputs, a single tensor is
-                returned. If a tuple is provided for inputs, a tuple of
-                corresponding sized tensors is returned.
-            delta (tensor, optional): This is computed using the property that the total
-                sum of forward_func(inputs) - forward_func(baselines)
-                must equal the total sum of the attributions computed
-                based on Deeplift's rescale rule.
-                Delta is calculated per example, meaning that the number of
-                elements in returned delta tensor is equal to the number of
-                of examples in input.
+            **attributions** or 2-element tuple of **attributions**, **delta**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Attribution score computed based on DeepLift rescale rule with respect
+                to each input feature. Attributions will always be
+                the same size as the provided inputs, with each value
+                providing the attribution of the corresponding input index.
+                If a single tensor is provided as inputs, a single tensor is
+                returned. If a tuple is provided for inputs, a tuple of
+                corresponding sized tensors is returned.
+            - **delta** (*tensor*, returned if return_convergence_delta=True):
+                This is computed using the property that
+                the total sum of forward_func(inputs) - forward_func(baselines)
+                must equal the total sum of the attributions computed
+                based on Deeplift's rescale rule.
+                Delta is calculated per example, meaning that the number of
+                elements in returned delta tensor is equal to the number of
+                of examples in input.

         Examples::

@@ -435,16 +436,17 @@ def attribute(
                 Default: False

         Returns:
-
-            attributions (tensor or tuple of tensors): Attribution score
-                computed based on DeepLift rescale rule with respect
-                to each input feature. Attributions will always be
+            **attributions** or 2-element tuple of **attributions**, **delta**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Attribution score computed based on DeepLift rescale rule with
+                respect to each input feature. Attributions will always be
                 the same size as the provided inputs, with each value
                 providing the attribution of the corresponding input index.
                 If a single tensor is provided as inputs, a single tensor is
                 returned. If a tuple is provided for inputs, a tuple of
                 corresponding sized tensors is returned.
-            delta (tensor, optional): This is computed using the property that the
+            - **delta** (*tensor*, returned if return_convergence_delta=True):
+                This is computed using the property that the
                 total sum of forward_func(inputs) - forward_func(baselines)
                 must be very close to the total sum of attributions
                 computed based on approximated SHAP values using
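
For reference, the tuple contract these docstrings describe looks like this in use. A minimal sketch, not part of the commit; the toy model and tensor names are illustrative:

    import torch
    import torch.nn as nn
    from captum.attr import DeepLift

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)
    baselines = torch.zeros(8, 3)

    dl = DeepLift(model)
    # Default: a single tensor of attributions, same shape as inputs.
    attributions = dl.attribute(inputs, baselines=baselines, target=0)
    # With return_convergence_delta=True: a 2-element tuple, where delta
    # holds one value per example in the batch.
    attributions, delta = dl.attribute(
        inputs, baselines=baselines, target=0, return_convergence_delta=True
    )
    assert attributions.shape == inputs.shape and delta.numel() == 8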

captum/attr/_core/gradient_shap.py

Lines changed: 5 additions & 4 deletions
@@ -141,16 +141,17 @@ def attribute(
                 a tuple following attributions.
                 Default: False
         Returns:
-
-            attributions (tensor or tuple of tensors): Attribution score
-                computed based on GradientSHAP with respect
+            **attributions** or 2-element tuple of **attributions**, **delta**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Attribution score computed based on GradientSHAP with respect
                 to each input feature. Attributions will always be
                 the same size as the provided inputs, with each value
                 providing the attribution of the corresponding input index.
                 If a single tensor is provided as inputs, a single tensor is
                 returned. If a tuple is provided for inputs, a tuple of
                 corresponding sized tensors is returned.
-            delta (tensor, optional): This is computed using the property that the total
+            - **delta** (*tensor*, returned if return_convergence_delta=True):
+                This is computed using the property that the total
                 sum of forward_func(inputs) - forward_func(baselines)
                 must be very close to the total sum of the attributions
                 based on GradientSHAP.
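
The same tuple contract, sketched for GradientSHAP; unlike DeepLift, baselines here are samples from a reference distribution. Toy names again, and argument defaults may differ across Captum versions:

    import torch
    import torch.nn as nn
    from captum.attr import GradientShap

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)
    # A small set of reference samples serves as the baseline distribution.
    baseline_dist = 0.1 * torch.randn(20, 3)

    gs = GradientShap(model)
    attributions, delta = gs.attribute(
        inputs, baselines=baseline_dist, n_samples=10, target=0,
        return_convergence_delta=True,
    )
    # Attributions match the input shape; delta is computed per drawn sample.
    assert attributions.shape == inputs.shape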

captum/attr/_core/input_x_gradient.py

Lines changed: 4 additions & 3 deletions
@@ -68,9 +68,10 @@ def attribute(self, inputs, target=None, additional_forward_args=None):
                 to these arguments.
                 Default: None

-        Return:
-
-            attributions (tensor or tuple of tensors): The input x gradient with
+        Returns:
+            *tensor* or tuple of *tensors* of **attributions**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                The input x gradient with
                 respect to each input feature. Attributions will always be
                 the same size as the provided inputs, with each value
                 providing the attribution of the corresponding input index.
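
Input x gradient has no convergence delta, so attribute returns only the attribution tensor(s). A minimal sketch with illustrative names:

    import torch
    import torch.nn as nn
    from captum.attr import InputXGradient

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    ixg = InputXGradient(model)
    # A single input tensor yields a single attribution tensor of the
    # same shape.
    attributions = ixg.attribute(inputs, target=0)
    assert attributions.shape == inputs.shape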

captum/attr/_core/integrated_gradients.py

Lines changed: 19 additions & 17 deletions
@@ -124,23 +124,25 @@ def attribute(
                 is set to True convergence delta will be returned in
                 a tuple following attributions.
                 Default: False
-        Return:
-
-            attributions (tensor or tuple of tensors): Integrated gradients with
-                respect to each input feature. attributions will always be
-                the same size as the provided inputs, with each value
-                providing the attribution of the corresponding input index.
-                If a single tensor is provided as inputs, a single tensor is
-                returned. If a tuple is provided for inputs, a tuple of
-                corresponding sized tensors is returned.
-            delta (tensor, optional): The difference between the total approximated
-                and true integrated gradients.
-                This is computed using the property that the total sum of
-                forward_func(inputs) - forward_func(baselines) must equal
-                the total sum of the integrated gradient.
-                Delta is calculated per example, meaning that the number of
-                elements in returned delta tensor is equal to the number of
-                of examples in inputs.
+        Returns:
+            **attributions** or 2-element tuple of **attributions**, **delta**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Integrated gradients with respect to each input feature.
+                attributions will always be the same size as the provided
+                inputs, with each value providing the attribution of the
+                corresponding input index.
+                If a single tensor is provided as inputs, a single tensor is
+                returned. If a tuple is provided for inputs, a tuple of
+                corresponding sized tensors is returned.
+            - **delta** (*tensor*, returned if return_convergence_delta=True):
+                The difference between the total approximated and true
+                integrated gradients. This is computed using the property
+                that the total sum of forward_func(inputs) -
+                forward_func(baselines) must equal the total sum of the
+                integrated gradient.
+                Delta is calculated per example, meaning that the number of
+                elements in returned delta tensor is equal to the number of
+                of examples in inputs.

         Examples::

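The completeness property the delta paragraph refers to can be checked directly. A sketch under the same toy-model assumptions as the earlier examples:

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)
    baselines = torch.zeros(8, 3)

    ig = IntegratedGradients(model)
    attributions, delta = ig.attribute(
        inputs, baselines=baselines, target=0, return_convergence_delta=True
    )
    # One delta entry per example in the batch.
    assert delta.numel() == inputs.shape[0]
    # Completeness: total attribution per example should be close to
    # forward_func(inputs) - forward_func(baselines) for the chosen target.
    with torch.no_grad():
        diff = model(inputs)[:, 0] - model(baselines)[:, 0]
    # This per-example gap is what delta reports (up to sign convention).
    print((attributions.sum(dim=1) - diff).abs().max())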

captum/attr/_core/internal_influence.py

Lines changed: 6 additions & 5 deletions
@@ -17,7 +17,7 @@
 class InternalInfluence(LayerAttribution):
     def __init__(self, forward_func, layer, device_ids=None):
         r"""
-        Args
+        Args:

             forward_func (callable): The forward function of the model or any
                 modification of it
@@ -57,7 +57,7 @@ def attribute(
         taking the layer as input, integrating the gradient of the layer with
         respect to the output.

-        Args
+        Args:

             inputs (tensor or tuple of tensors): Input for which internal
                 influence is computed. If forward_func takes a single
@@ -133,9 +133,10 @@ def attribute(
                 are processed in one batch.
                 Default: None

-        Return
-
-            attributions (tensor): Internal influence of each neuron in given
+        Returns:
+            *tensor* of **attributions**:
+            - **attributions** (*tensor*):
+                Internal influence of each neuron in given
                 layer output. Attributions will always be the same size
                 as the output of the given layer.

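A sketch of the layer-sized return described above, with an assumed toy model; `model[1]` (the ReLU) stands in for any layer of interest:

    import torch
    import torch.nn as nn
    from captum.attr import InternalInfluence

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    inf = InternalInfluence(model, model[1])
    # Attributions match the chosen layer's output size (here 4 neurons),
    # not the input size.
    attributions = inf.attribute(inputs, target=0)
    assert attributions.shape == (8, 4)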

captum/attr/_core/layer_activation.py

Lines changed: 6 additions & 5 deletions
@@ -6,7 +6,7 @@
 class LayerActivation(LayerAttribution):
     def __init__(self, forward_func, layer, device_ids=None):
         r"""
-        Args
+        Args:

             forward_func (callable): The forward function of the model or any
                 modification of it
@@ -28,7 +28,7 @@ def attribute(self, inputs, additional_forward_args=None):
         r"""
         Computes activation of selected layer for given input.

-        Args
+        Args:

             inputs (tensor or tuple of tensors): Input for which layer
                 activation is computed. If forward_func takes a single
@@ -51,9 +51,10 @@ def attribute(self, inputs, additional_forward_args=None):
                 to these arguments.
                 Default: None

-        Return
-
-            attributions (tensor): Activation of each neuron in given layer output.
+        Returns:
+            *tensor* of **attributions**:
+            - **attributions** (*tensor*):
+                Activation of each neuron in given layer output.
                 Attributions will always be the same size as the
                 output of the given layer.

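LayerActivation takes neither a target nor baselines; it simply returns the layer's forward activation. A minimal illustrative sketch:

    import torch
    import torch.nn as nn
    from captum.attr import LayerActivation

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    la = LayerActivation(model, model[1])
    # The "attribution" here is just the layer's activation for these inputs.
    attributions = la.attribute(inputs)
    assert attributions.shape == (8, 4)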

captum/attr/_core/layer_conductance.py

Lines changed: 8 additions & 6 deletions
@@ -17,7 +17,7 @@
 class LayerConductance(LayerAttribution):
     def __init__(self, forward_func, layer, device_ids=None):
         r"""
-        Args
+        Args:

             forward_func (callable): The forward function of the model or any
                 modification of it
@@ -63,7 +63,7 @@ def attribute(
         features, utilize NeuronConductance instead, and provide the target
         neuron index.

-        Args
+        Args:

             inputs (tensor or tuple of tensors): Input for which layer
                 conductance is computed. If forward_func takes a single
@@ -144,12 +144,14 @@ def attribute(
                 a tuple following attributions.
                 Default: False

-        Return
-
-            attributions (tensor): Conductance of each neuron in given layer output.
+        Returns:
+            **attributions** or 2-element tuple of **attributions**, **delta**:
+            - **attributions** (*tensor*):
+                Conductance of each neuron in given layer output.
                 Attributions will always be the same size as the
                 output of the given layer.
-            delta (tensor, optional): The difference between the total
+            - **delta** (*tensor*, returned if return_convergence_delta=True):
+                The difference between the total
                 approximated and true conductance.
                 This is computed using the property that the total sum of
                 forward_func(inputs) - forward_func(baselines) must equal
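
Layer conductance follows the same optional-delta pattern as the input-space methods, but the attribution tensor is layer-sized. A sketch under the usual toy assumptions:

    import torch
    import torch.nn as nn
    from captum.attr import LayerConductance

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    lc = LayerConductance(model, model[1])
    attributions, delta = lc.attribute(
        inputs, baselines=torch.zeros(8, 3), target=0,
        return_convergence_delta=True,
    )
    # Layer-sized attributions, plus the approximation gap in delta.
    assert attributions.shape == (8, 4)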

captum/attr/_core/layer_gradient_x_activation.py

Lines changed: 6 additions & 5 deletions
@@ -7,7 +7,7 @@
 class LayerGradientXActivation(LayerAttribution):
     def __init__(self, forward_func, layer, device_ids=None):
         r"""
-        Args
+        Args:

             forward_func (callable): The forward function of the model or any
                 modification of it
@@ -30,7 +30,7 @@ def attribute(self, inputs, target=None, additional_forward_args=None):
         Computes element-wise product of gradient and activation for selected
         layer on given inputs.

-        Args
+        Args:

             inputs (tensor or tuple of tensors): Input for which attributions
                 are computed. If forward_func takes a single
@@ -78,9 +78,10 @@ def attribute(self, inputs, target=None, additional_forward_args=None):
                 to these arguments.
                 Default: None

-        Return
-
-            attributions (tensor): Product of gradient and activation for each
+        Returns:
+            *tensor* of **attributions**:
+            - **attributions** (*tensor*):
+                Product of gradient and activation for each
                 neuron in given layer output.
                 Attributions will always be the same size as the
                 output of the given layer.
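
A short sketch of the layer-level gradient x activation product; names are illustrative:

    import torch
    import torch.nn as nn
    from captum.attr import LayerGradientXActivation

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    lga = LayerGradientXActivation(model, model[1])
    # Element-wise product of the layer activation and the gradient of the
    # target output with respect to that activation.
    attributions = lga.attribute(inputs, target=0)
    assert attributions.shape == (8, 4)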

captum/attr/_core/neuron_conductance.py

Lines changed: 5 additions & 4 deletions
@@ -19,7 +19,7 @@
 class NeuronConductance(NeuronAttribution):
     def __init__(self, forward_func, layer, device_ids=None):
         r"""
-        Args
+        Args:

             forward_func (callable): The forward function of the model or any
                 modification of it
@@ -138,9 +138,10 @@ def attribute(
                 processed in one batch.
                 Default: None

-        Return:
-
-            attributions (tensor or tuple of tensors): Conductance for
+        Returns:
+            *tensor* or tuple of *tensors* of **attributions**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Conductance for
                 particular neuron with respect to each input feature.
                 Attributions will always be the same size as the provided
                 inputs, with each value providing the attribution of the
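
Neuron methods select one neuron in the layer output but attribute back to the inputs, so the result is input-sized. A sketch assuming this version's neuron_index argument (later releases renamed it):

    import torch
    import torch.nn as nn
    from captum.attr import NeuronConductance

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    nc = NeuronConductance(model, model[1])
    # Conductance of hidden neuron 0 toward output 0, in input space.
    attributions = nc.attribute(
        inputs, neuron_index=0, target=0, baselines=torch.zeros(8, 3)
    )
    assert attributions.shape == inputs.shape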

captum/attr/_core/neuron_gradient.py

Lines changed: 9 additions & 9 deletions
@@ -15,7 +15,7 @@
 class NeuronGradient(NeuronAttribution):
     def __init__(self, forward_func, layer, device_ids=None):
         r"""
-        Args
+        Args:

             forward_func (callable): The forward function of the model or any
                 modification of it
@@ -38,7 +38,7 @@ def attribute(self, inputs, neuron_index, additional_forward_args=None):
         Computes the gradient of the output of a particular neuron with
         respect to the inputs of the network.

-        Args
+        Args:

             inputs (tensor or tuple of tensors): Input for which neuron
                 gradients are computed. If forward_func takes a single
@@ -68,13 +68,13 @@ def attribute(self, inputs, neuron_index, additional_forward_args=None):
                 to these arguments.
                 Default: None

-        Return
-
-            attributions (tensor or tuple of tensors): Gradients of
-                particular neuron with respect to each input feature.
-                Attributions will always be the same size as the provided
-                inputs, with each value providing the attribution of the
-                corresponding input index.
+        Returns:
+            *tensor* or tuple of *tensors* of **attributions**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Gradients of particular neuron with respect to each input
+                feature. Attributions will always be the same size as the
+                provided inputs, with each value providing the attribution
+                of the corresponding input index.
                 If a single tensor is provided as inputs, a single tensor is
                 returned. If a tuple is provided for inputs, a tuple of
                 corresponding sized tensors is returned.
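
The signature shown in this hunk (inputs plus neuron_index) in use, with the same assumed toy model:

    import torch
    import torch.nn as nn
    from captum.attr import NeuronGradient

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    ng = NeuronGradient(model, model[1])
    # Gradient of hidden neuron 0 with respect to the inputs: input-sized.
    attributions = ng.attribute(inputs, neuron_index=0)
    assert attributions.shape == inputs.shape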

captum/attr/_core/neuron_integrated_gradients.py

Lines changed: 6 additions & 5 deletions
@@ -8,7 +8,7 @@
 class NeuronIntegratedGradients(NeuronAttribution):
     def __init__(self, forward_func, layer, device_ids=None):
         r"""
-        Args
+        Args:

             forward_func (callable): The forward function of the model or any
                 modification of it
@@ -106,10 +106,11 @@ def attribute(
                 processed in one batch.
                 Default: None

-        Return:
-
-            attributions (tensor or tuple of tensors): Integrated gradients for
-                particular neuron with respect to each input feature.
+        Returns:
+            *tensor* or tuple of *tensors* of **attributions**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Integrated gradients for particular neuron with
+                respect to each input feature.
                 Attributions will always be the same size as the provided
                 inputs, with each value providing the attribution of the
                 corresponding input index.
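
Same pattern with integrated gradients for a single neuron; a hedged sketch with illustrative names:

    import torch
    import torch.nn as nn
    from captum.attr import NeuronIntegratedGradients

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    nig = NeuronIntegratedGradients(model, model[1])
    # Integrated gradients for hidden neuron 0, attributed to the inputs.
    attributions = nig.attribute(inputs, neuron_index=0)
    assert attributions.shape == inputs.shape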

captum/attr/_core/noise_tunnel.py

Lines changed: 6 additions & 4 deletions
@@ -109,16 +109,18 @@ def attribute(
                 For instance, such arguments include
                 `additional_forward_args` and `baselines`.

-        Return:
-
-            attributions (tensor or tuple of tensors): Attribution with
+        Returns:
+            **attributions** or 2-element tuple of **attributions**, **delta**:
+            - **attributions** (*tensor* or tuple of *tensors*):
+                Attribution with
                 respect to each input feature. attributions will always be
                 the same size as the provided inputs, with each value
                 providing the attribution of the corresponding input index.
                 If a single tensor is provided as inputs, a single tensor is
                 returned. If a tuple is provided for inputs, a tuple of
                 corresponding sized tensors is returned.
-            delta (float, optional): Approximation error computed by the
+            - **delta** (*float*, returned if return_convergence_delta=True):
+                Approximation error computed by the
                 attribution algorithm. Not all attribution algorithms
                 return delta value. It is computed only for some
                 algorithms, e.g. integrated gradients.
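
NoiseTunnel wraps another attribution method and averages over noisy copies of the input, forwarding extra arguments such as baselines and target to the wrapped method. A sketch assuming this version's argument names (the sample-count argument has been renamed in later releases):

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients, NoiseTunnel

    model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2)).eval()
    inputs = torch.randn(8, 3)

    nt = NoiseTunnel(IntegratedGradients(model))
    # SmoothGrad: average IG attributions over n_samples noisy copies of
    # the input; baselines and target are forwarded to IntegratedGradients.
    attributions = nt.attribute(
        inputs, nt_type='smoothgrad', n_samples=10, stdevs=0.2,
        baselines=torch.zeros(8, 3), target=0,
    )
    assert attributions.shape == inputs.shape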
