Commit 98f0344

prototype_source/torchscript_freezing.py translation (#788)

1 parent 4658d55

1 file changed: prototype_source/torchscript_freezing.py (25 additions, 28 deletions)
@@ -1,23 +1,24 @@
 """
-Model Freezing in TorchScript
+Model Freezing in TorchScript
 =============================
+Translation: `Jiho Kim <https://github.com/jiho3004/>`_
 
-In this tutorial, we introduce the syntax for *model freezing* in TorchScript.
-Freezing is the process of inlining PyTorch module parameter and attribute
-values into the TorchScript internal representation. Parameter and attribute
-values are treated as final values and cannot be modified in the resulting
-frozen module.
+In this tutorial, we introduce the syntax for *model freezing* in TorchScript.
+Freezing is the process of inlining PyTorch module parameter and attribute
+values into the TorchScript internal representation. Parameter and attribute
+values are treated as final values and cannot be modified in the resulting
+frozen module.
 
-Basic Syntax
+Basic Syntax
 ------------
-Model freezing can be invoked using the API below:
+
+Model freezing can be invoked using the API below:
 
 ``torch.jit.freeze(mod : ScriptModule, names : str[]) -> ScriptModule``
 
-Note the input module can either be the result of scripting or tracing.
-See https://tutorials.pytorch.kr/beginner/Intro_to_TorchScript_tutorial.html
+The input module is the result of scripting or tracing.
+See the `Introduction to TorchScript tutorial
+<https://tutorials.pytorch.kr/beginner/Intro_to_TorchScript_tutorial.html>`_.
 
-Next, we demonstrate how freezing works using an example:
+Next, we demonstrate how freezing works using an example:
 
 """
 
 import torch, time
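The ``torch.jit.freeze`` API covered in the hunk above can be exercised with a minimal sketch. This is an illustrative stand-in, not the tutorial's exact model; it only assumes PyTorch is installed. Note that freezing requires the module to be in eval mode:

```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(1, 16, 3)

    def forward(self, x):
        return self.conv1(x)

# Freezing accepts a ScriptModule, i.e. the result of scripting (or tracing).
net = torch.jit.script(Net().eval())   # eval mode is required for freezing
fnet = torch.jit.freeze(net)           # parameters/attributes are inlined

# The frozen module still runs inference as usual.
out = fnet(torch.randn(1, 1, 8, 8))
print(out.shape)
```

By default only ``forward`` is preserved; the optional ``preserved_attrs`` argument keeps additional methods or attributes from being inlined.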
@@ -58,17 +59,15 @@ def version(self):
 
 try:
     print(fnet.conv1.bias)
-    # without exception handling, prints:
+    # Without exception handling, this prints the following for the name 'conv1':
     # RuntimeError: __torch__.z.___torch_mangle_3.Net does not have a field
-    # with name 'conv1'
 except RuntimeError:
     print("field 'conv1' is inlined. It does not exist in 'fnet'")
 
 try:
     fnet.version()
-    # without exception handling, prints:
+    # Without exception handling, this prints the following for the name 'version':
     # RuntimeError: __torch__.z.___torch_mangle_3.Net does not have a field
-    # with name 'version'
 except RuntimeError:
     print("method 'version' is not deleted in fnet. Only 'forward' is preserved")

@@ -108,27 +107,25 @@ def version(self):
 print("Frozen - Inference time: {0:5.2f}".format(end-start), flush=True)
 
 ###############################################################
-# On my machine, I measured the time:
+# Times measured on my machine:
 #
 # * Scripted - Warm up time: 0.0107
 # * Frozen - Warm up time: 0.0048
 # * Scripted - Inference: 1.35
 # * Frozen - Inference time: 1.17
 
 ###############################################################
-# In our example, warm up time measures the first two runs. The frozen model
-# is 50% faster than the scripted model. On some more complex models, we
-# observed even higher speed up of warm up time. Freezing achieves this speed up
-# because it is doing some of the work TorchScript has to do when the first couple
-# runs are initiated.
+# In this example, warm up time measures the first two runs.
+# The frozen model is 50% faster than the scripted model.
+# On more complex models, warm up time improves even more.
+# Freezing achieves this speed up because it does some of the work
+# TorchScript has to do when the first couple of runs are initiated.
 #
-# Inference time measures inference execution time after the model is warmed up.
-# Although we observed significant variation in execution time, the
-# frozen model is often about 15% faster than the scripted model. When input is larger,
-# we observe a smaller speed up because the execution is dominated by tensor operations.
+# Inference time measures execution time after the model is warmed up.
+# Although there is significant variation in execution time, the frozen
+# model is often about 15% faster than the scripted model. With larger
+# inputs the speed up is smaller, because execution is dominated by
+# tensor operations.
 
 ###############################################################
-# Conclusion
+# Conclusion
 # -----------
-# In this tutorial, we learned about model freezing. Freezing is a useful technique to
-# optimize models for inference and it also can significantly reduce TorchScript warmup time.
+# In this tutorial, we learned about model freezing.
+# Freezing is a useful technique for optimizing models for inference,
+# and it can significantly reduce TorchScript warmup time.
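The warmup/inference comparison discussed in the hunk above can be reproduced with a sketch like the following. The model, input size, and run counts are illustrative assumptions, and absolute timings are machine-dependent:

```python
import time
import torch

# A small stand-in model (the tutorial uses its own Net class).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3),
    torch.nn.ReLU(),
    torch.nn.Conv2d(32, 32, 3),
).eval()

snet = torch.jit.script(model)                    # scripted only
fnet = torch.jit.freeze(torch.jit.script(model))  # scripted, then frozen
x = torch.randn(1, 3, 64, 64)

def measure(net, runs=10):
    # Warm up time: the first two runs, where TorchScript optimizations kick in.
    start = time.time()
    for _ in range(2):
        net(x)
    warm = time.time() - start
    # Inference time: steady-state execution after warmup.
    start = time.time()
    with torch.no_grad():
        for _ in range(runs):
            net(x)
    return warm, time.time() - start

for name, net in [("Scripted", snet), ("Frozen", fnet)]:
    warm, infer = measure(net)
    print("{} - Warm up: {:.4f}  Inference: {:.4f}".format(name, warm, infer))
```

Since both modules share the same weights, their outputs agree; only the warmup and execution times differ.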
