
Conversation

@ghost commented May 16, 2020

The time spent compiling dparse as a static library is the same, but this change reduces memory usage.
It will also become more interesting when the new CTFE implementation arrives.

codecov bot commented May 16, 2020

Codecov Report

Merging #406 into master will decrease coverage by 0.05%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master     #406      +/-   ##
==========================================
- Coverage   78.58%   78.52%   -0.06%     
==========================================
  Files           9        9              
  Lines        7382     7387       +5     
==========================================
  Hits         5801     5801              
- Misses       1581     1586       +5     
Impacted Files        Coverage Δ
src/dparse/parser.d   91.00% <ø> (-0.13%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@ghost (Author) commented May 16, 2020

For a more conceptual explanation: say you have a template temp(T), and consider the instance temp!int. This instance is stored in a kind of hashmap, so the cost of building an instance for int is only paid once.

The problem here is that the cost of creating instances and looking them up every time the template is used again is not worth it. The cache saves so little that generating the code again does not make compilation any slower. So instead, only CTFE is used, which has the side effect of polluting the symbol tables less and reducing the memory used.
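
A minimal sketch of that caching behaviour (made-up names, not the actual dparse code):

template temp(T)
{
    // the string generated for one concrete T
    enum temp = "found a " ~ T.stringof;
}

// first use: the compiler builds temp!int and stores it in its
// instantiation table (a kind of hashmap keyed on the arguments)
pragma(msg, temp!int);

// second use: nothing is regenerated, the cached instance is reused
pragma(msg, temp!int);

void main() {}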

@WebFreak001 (Member) commented May 18, 2020

OK, I did some small benchmarks of how this affects compile time and memory usage with a simple touch src/dparse/ast.d && time dub build (always picking the run with the minimum time out of 5 runs, LDC 1.20.1 on Ivy Bridge):

Before: dub build time = 7.36s, peak memory = 831 MB
After: dub build time = 7.34s, peak memory = 826 MB

I don't know if that tiny improvement is really worth this change. Time in both cases was fairly stable, with the worst time being exactly +0.05s in both cases (excluding the first warmup run); memory was very stable, varying only in the MB range.

@ghost (Author) commented May 19, 2020

Yes, I also see those 6 MB less using:

for i in $(seq 1 5); do
  git checkout mixin-using-ctfe
  echo "LDC2 PR pass $i"
  /usr/bin/time -v dub build --build=release --compiler=ldc2 --force
  git checkout master
  echo "LDC2 MASTER pass $i"
  /usr/bin/time -v dub build --build=release --compiler=ldc2 --force
done

for i in $(seq 1 5); do
  git checkout mixin-using-ctfe
  echo "DMD PR pass $i"
  /usr/bin/time -v dub build --build=release --compiler=dmd --force
  git checkout master
  echo "DMD MASTER pass $i"
  /usr/bin/time -v dub build --build=release --compiler=dmd --force
done

As explained, this is due to the fact that the templates were not useful. They were used as a kind of cache, in the hope of reducing the concatenation (~) operations in CTFE, but the only effect of that cache was to use more memory, even though it was conceptually a good idea.
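
Roughly, the two patterns compare like this (a simplified sketch with made-up names, not the real parser code):

// before: a template memoizes the generated mixin string, so the ~
// only runs once per distinct argument, but every instantiation also
// leaves a cached instance and a symbol behind
template cachedMixin(string name)
{
    enum cachedMixin = "int " ~ name ~ ";";
}

// after: a plain function evaluated through CTFE; the ~ runs on every
// use, but no template instance is kept in the compiler's tables
string ctfeMixin(string name)
{
    return "int " ~ name ~ ";";
}

mixin(cachedMixin!"a");   // declares: int a;
mixin(ctfeMixin("b"));    // declares: int b;

void main() { a = 1; b = 2; }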
