<h1><a href="https://ocamlpro.com/blog/2024_09_01_alt_ergo_2_6_0_released">Alt-Ergo 2.6 is Out!</a></h1>
<p>By Basile Clément and Pierre Villemot, 2024-09-30</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/alt-ergo-8-colors-blank-bg.png">
<img alt="The Alt-Ergo 2.6 release comes with many enhancements!" src="/blog/assets/img/alt-ergo-8-colors-blank-bg.png"/>
</a>
<div class="caption">
The Alt-Ergo 2.6 release comes with many enhancements!
</div>
</p>
</div>
</p>
<p><strong>We are excited to announce the release of Alt-Ergo 2.6!</strong></p>
<p>Alt-Ergo is an open-source automated prover used for formal verification in
software development. It is part of the arsenal behind static analysis
frameworks such as TrustInSoft Analyzer and Frama-C, and is one of the
solvers behind Why3, a platform for deductive program verification. The newly
released version 2.6 brings new features and performance improvements.</p>
<p>Development on Alt-Ergo has accelerated significantly this past year, thanks to
the launch of the <a href="https://decysif.fr/en/">DéCySif</a> joint research project (i-Démo)
with AdaCore, Inria, OCamlPro and TrustInSoft. The improvements to bit-vectors
and algebraic data types in this release are sponsored by the DéCySif project.</p>
<p>The highlights of Alt-Ergo 2.6 are:</p>
<ul>
<li>Support for reasoning and model generation with bit-vectors
</li>
<li>Model generation for algebraic data types
</li>
<li>Optimization with <code>(maximize)</code> and <code>(minimize)</code>
</li>
<li>FPA support is enabled by default and available in SMT-LIB format
</li>
<li>Binary releases now on GitHub
</li>
</ul>
<p>Alt-Ergo 2.6 also includes other improvements to the user interface (notably
the <code>set-option</code> SMT-LIB command), use of Dolmen as the default frontend for
SMT-LIB and native input, and many bug fixes.</p>
<h3>Bit-vectors</h3>
<p>In Alt-Ergo 2.5, we introduced built-in functions for the bit-vector
primitives from the SMT-LIB standard, but only provided limited reasoning
support. For Alt-Ergo 2.6, we set out to improve this reasoning support, and
have developed a new and improved relational theory for bit-vectors. This new
theory is built on a new constraint propagation core that draws heavily
on the architecture of the Colibri solver (as described in <a href="https://cea.hal.science/cea-01795779">Sharpening Constraint
Programming approaches for Bit-Vector Theory</a>), integrated into Alt-Ergo's
existing normalizing Shostak solver.</p>
<p>Bit-vectors are commonly used in verification of low-level code and in
cryptography, so improved support significantly enhances Alt-Ergo’s
applicability in these domains.</p>
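<p>For instance, a small SMT-LIB query exercising the bit-vector theory together
with model generation might look like the following sketch (the constants and
values below are ours, chosen purely for illustration):</p>
<pre><code class="language-smt2">(set-option :produce-models true)
(set-logic QF_BV)
(declare-const x (_ BitVec 8))
(declare-const y (_ BitVec 8))
; look for two bytes that sum to 42, with x strictly below y (unsigned)
(assert (= (bvadd x y) #x2a))
(assert (bvult x y))
(check-sat)
(get-model)
</code></pre>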
<p>There is still room for improvement, so please report any issue you encounter
with the bit-vector theory (or Alt-Ergo in general) via our
<a href="https://github.com/ocamlpro/alt-ergo/issues">issue tracker</a>.</p>
<p>To showcase the improvements in Alt-Ergo 2.6, we compared it against version
2.5 and the industry-leading solvers Z3 and CVC5 on a dataset of bit-vector
problems collected from our partners in the DéCySif project. The (no BV)
variants for Alt-Ergo do not use the new bit-vector theory but instead an
axiomatization of bit-vector primitives provided by Why3. The percentages
represent the proportion of bit-vector problems solved successfully in each
configuration.</p>
<table class="table">
<thead>
<tr class="table-light text-center">
<th scope="col"></th>
<th scope="col" colspan="2">AE 2.5</th>
<th scope="col" colspan="2">AE 2.6</th>
<th scope="col">Z3 (4.12.5)</th>
<th scope="col">CVC5 (1.1.2)</th>
<th scope="col">Total</th>
</tr>
<tr>
<th scope="row"></th>
<td>(BV)</td>
<td>(no BV)</td>
<td>(BV)</td>
<td>(no BV)</td>
<td></td>
<td></td>
<td></td>
</tr>
</thead>
<tbody>
<tr>
<th scope="row">#</th>
<td>4128</td>
<td>4870</td>
<td>6265</td>
<td>4940</td>
<td>5482</td>
<td>7415</td>
<td>9038</td>
</tr>
<tr>
<th scope="row">%</th>
<td>46%</td>
<td>54%</td>
<td>69%</td>
<td>54%</td>
<td>61%</td>
<td>82%</td>
<td>100%</td>
</tr>
</tbody>
</table>
<p>As the table shows, Alt-Ergo 2.6 significantly outperforms version 2.5, and the
new built-in bit-vector theory outperforms Why3's axiomatization. We even
surpass Z3 on this benchmark, a testament to the new bit-vector theory in
Alt-Ergo 2.6.</p>
<h3>Model Generation</h3>
<p>The bit-vector theory is not the only one Alt-Ergo 2.6 improves upon. Model generation
was introduced in Alt-Ergo 2.5 with support for booleans, integers, reals,
arrays, enumerated types, and records. Alt-Ergo 2.6 extends this support to
bit-vectors and arbitrary algebraic data types, which means that model
generation is now enabled for all the theories supported by Alt-Ergo.</p>
<p>Model generation allows users to extract concrete examples or counterexamples,
aiding in debugging and verification of their systems.</p>
<p>Model generation is also more robust in Alt-Ergo 2.6, with numerous bug fixes
and improvements for edge cases.</p>
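<p>As a hedged illustration, a query along these lines should now yield a concrete
value for an algebraic data type (the datatype and names below are ours, not
taken from the release notes):</p>
<pre><code class="language-smt2">(set-option :produce-models true)
(set-logic ALL)
(declare-datatype shape
  ((Circle (radius Int)) (Square (side Int))))
(declare-const s shape)
; constrain [s] to be a circle whose radius is greater than 3
(assert ((_ is Circle) s))
(assert (> (radius s) 3))
(check-sat)
(get-model)
</code></pre>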
<h3>Optimization</h3>
<p>Alt-Ergo 2.6 introduces optimization capabilities, available via SMT-LIB input
using OptiSMT primitives such as <code>(minimize)</code> and <code>(maximize)</code> and compatible
with Z3 and OptiMathSAT. Optimization allows guiding the solver towards simpler
and smaller counterexamples, helping users find more concrete and realistic
scenarios to trigger a bug.</p>
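<p>As a minimal sketch (assuming the Z3-style placement of objectives before
<code>(check-sat)</code>), an optimizing query could look like this:</p>
<pre><code class="language-smt2">(set-option :produce-models true)
(set-logic ALL)
(declare-const x Int)
(assert (and (<= 0 x) (<= x 10)))
; ask for the largest admissible value of x
(maximize x)
(check-sat)
(get-model)
</code></pre>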
<p>See some
<a href="https://ocamlpro.github.io/alt-ergo/latest/Optimization.html">examples</a> in the
documentation.</p>
<h3>SMT-LIB command support</h3>
<p>Alt-Ergo 2.6 supports more SMT-LIB syntax and commands, such as:</p>
<ul>
<li>The <code>(get-info :all-statistics)</code> command to obtain information about the
solver's statistics
</li>
<li>The <code>(reset)</code>, <code>(exit)</code> and <code>(echo)</code> commands
</li>
<li>The <code>(get-assignment)</code> command, as well as the <code>:named</code> attribute and
<code>:produce-assignments</code> option
</li>
</ul>
<p>See the <a href="https://smt-lib.org">SMT-LIB standard</a> for more details about these
commands.</p>
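<p>For instance, combining the <code>:named</code> attribute with <code>(get-assignment)</code> might
look like the following sketch (the names are ours):</p>
<pre><code class="language-smt2">(set-option :produce-assignments true)
(set-logic ALL)
(declare-const p Bool)
(declare-const q Bool)
(assert (! (or p q) :named p_or_q))
(assert (! (not q) :named not_q))
(check-sat)
; returns the truth value assigned to each named boolean term
(get-assignment)
</code></pre>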
<h3>Floating-point theory</h3>
<p>In this release, we have enabled Alt-Ergo's <a href="https://ocamlpro.github.io/alt-ergo/next/Alt_ergo_native/05_theories.html#floating-point-arithmetic">floating-point
theory</a>
by default: there is no need to provide the <code>--enable-theories fpa</code>
flag anymore. The theory can be disabled with <code>--disable-theories fpa,nra,ria</code>
(the <code>nra</code> and <code>ria</code> theories were automatically enabled along with the <code>fpa</code>
theory in Alt-Ergo 2.5).</p>
<p>We have also made the floating-point primitives available in the SMT-LIB
format as the indexed constant <code>ae.round</code> and the convenience <code>ae.float16</code>,
<code>ae.float32</code>, <code>ae.float64</code> and <code>ae.float128</code> functions; see the
<a href="https://ocamlpro.github.io/alt-ergo/v2.6.0/SMT-LIB_language/index.html#floating-point-arithmetic">documentation</a>.</p>
<h3>Dolmen is the new default frontend</h3>
<p>Introduced in Alt-Ergo 2.5, the Dolmen frontend has been rigorously tested for
regressions and is now the default for both <code>.smt2</code> and <code>.ae</code> files; the
<code>--frontend dolmen</code> flag that was introduced in Alt-Ergo 2.5 is no longer
necessary.</p>
<p>The Dolmen frontend is based on the <a href="https://github.com/gbury/dolmen">Dolmen</a>
library developed by Guillaume Bury at OCamlPro. It provides excellent support
for the SMT-LIB standard and is used to check the validity of all new problems in
the SMT-LIB benchmark collection, as well as the results of SMT-COMP, the annual
SMT-LIB-affiliated solver competition.</p>
<p>The preferred input format for Alt-Ergo is now the SMT-LIB format. The legacy
<code>.ae</code> format is still supported, but is now deprecated and users are
encouraged to migrate to the SMT-LIB format if possible. Please <a href="mailto:alt-ergo@ocamlpro.com">reach
out</a> if you find any issue while migrating to
the SMT-LIB format.</p>
<p>As we announced when releasing Alt-Ergo 2.5, the legacy frontend (which supports
<code>.ae</code> files only) is deprecated in Alt-Ergo 2.6, but it can still be
enabled with the <code>--frontend legacy</code> option. It will be removed entirely in
Alt-Ergo 2.7.</p>
<p>Parser extensions, such as the built-in AB-Why3 plugin, only work with the
legacy frontend, and will no longer work with Alt-Ergo 2.7. We are not
aware of any current users of either parser extensions or the AB-Why3 plugin:
if you need these features, please reach out to us on
<a href="https://github.com/ocamlpro/alt-ergo/issues">GitHub</a> or by
<a href="mailto:alt-ergo@ocamlpro.com">email</a> so that we can figure out a path
forward.</p>
<h3>Use of <code>dune-site</code> for plugins</h3>
<p>Starting with Alt-Ergo 2.6, we are using the plugin mechanism from
<code>dune-site</code>, which replaces the custom plugin loading mechanism based on <code>Dynlink</code>. Plugins now need
to be registered in the <code>(alt-ergo plugins)</code> site with the
<a href="https://dune.readthedocs.io/en/stable/reference/dune/plugin.html"><code>plugin</code> stanza</a>.</p>
<p>This change does not affect users; it only concerns developers of Alt-Ergo plugins. See the
<a href="https://github.com/OCamlPro/alt-ergo/blob/next/src/plugins/fm-simplex/dune">dune file</a>
for Alt-Ergo's built-in FM-Simplex plugin for reference.</p>
<h3>Binary releases on GitHub</h3>
<p>Starting with Alt-Ergo 2.6, we will be providing binary releases on the
<a href="https://github.com/ocamlpro/alt-ergo/releases">GitHub Releases</a> page for
Linux (x86_64) and macOS (x86_64 and arm). These are released under the
same <a href="https://ocamlpro.github.io/alt-ergo/latest/About/licenses/index.html">licensing conditions</a> as the Alt-Ergo source code.</p>
<p>The binary releases are statically linked and have no dependencies, except
for system dependencies on macOS. They do not support dynamically loading
plugins.</p>
<h3>Performance</h3>
<p>For Alt-Ergo 2.6, our main focus in terms of reasoning was on
bit-vectors and algebraic data types. Other theories also benefit from the
broader performance improvements we have made. On our internal
problem dataset, Alt-Ergo 2.6 is about 5% faster than Alt-Ergo 2.5 on the goals
they both prove.</p>
<h3>And more!</h3>
<p>This release also includes significant internal refactoring, notably
a rewrite from scratch of the interval domain. This improves the
accuracy of Alt-Ergo in handling interval arithmetic and facilitates mixed
operations involving integers and bit-vectors, resulting in shorter and more
reliable proofs.</p>
<p>See the complete changelog
<a href="https://ocamlpro.github.io/alt-ergo/v2.6.0/About/changes.html">here</a>.</p>
<p>We encourage you to try out Alt-Ergo 2.6 and share your experience or any
feedback on our <a href="https://github.com/OCamlPro/Alt-Ergo">GitHub</a> or by email at
<a href="mailto:alt-ergo@ocamlpro.com">alt-ergo@ocamlpro.com</a>. Your input will help
shape future releases!</p>
<h3>Acknowledgements</h3>
<p>We thank the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users' Club</a> members: AdaCore, the CEA, Thales,
Mitsubishi Electric R&D Center Europe (MERCE) and TrustInSoft.</p>
<p>Special thanks to David Mentré and Denis Cousineau at MERCE for funding the
initial optimization work. MERCE has been a Member of the Alt-Ergo Users'
Club for four years. This partnership allowed Alt-Ergo to evolve and we hope
that more users will join the Club on our journey to make Alt-Ergo a must-have
tool.</p>
<div class="figure">
<div class="card-light blog-logos">
<img alt="AdaCore logo" src="/assets/img/logo_adacore.svg">
<img alt="CEA list logo" src="/blog/assets/img/cealist.png">
<img alt="Thales logo" style="height: 24px;" src="/assets/img/logo_thales.svg">
<img alt="Mitsubishi Electric logo" src="/assets/img/logo_merce.png">
<img alt="TrustInSoft logo" style="height: 32px;" src="/assets/img/logo_trustinsoft.svg">
</div>
<div class="caption">The dedicated members of our Alt-Ergo Club!</div>
</div>
<h1><a href="https://ocamlpro.com/blog/2024_08_09_the_flambda2_snippets_3">Flambda2 Ep. 3: Speculative Inlining</a></h1>
<p>By Pierre Chambart, Vincent Laviron, Guillaume Bury, Dario Pinto and Nathanaëlle Courant, 2024-08-09</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/picture_egyptian_weighing_of_heart.jpg">
<img alt="A representation of Speculative Inlining through the famous Weighing Of The Heart of Egyptian Mythology. Egyptian God Anubis weighs his OCaml function, to see if it is worth inlining.<br />Credit: The Weighing of the Heart Ceremony, Ammit. Angus McBride (British, 1931-2007)" src="/blog/assets/img/picture_egyptian_weighing_of_heart.jpg"/>
</a>
<div class="caption">
A representation of Speculative Inlining through the famous Weighing Of The Heart of Egyptian Mythology. Egyptian God Anubis weighs his OCaml function, to see if it is worth inlining.<br />Credit: The Weighing of the Heart Ceremony, Ammit. Angus McBride (British, 1931-2007)
</div>
</p>
</div>
</p>
<h3>Welcome to a new episode of The Flambda2 Snippets!</h3>
<blockquote>
<p>The <strong>F2S</strong> blog posts aim at gradually introducing the world to the
inner-workings of a complex piece of software engineering: The <code>Flambda2 Optimising Compiler</code> for OCaml, a technical marvel born from a 10 year-long
effort in Research & Development and Compilation; with many more years of
expertise in all aspects of Computer Science and Formal Methods.</p>
</blockquote>
<p>Today's article will serve as an introduction to one of the key design
decisions structuring <code>Flambda2</code> that we will cover in the next episode in the
series: <code>Upward and Downward Traversals</code>.</p>
<p>See, there are interesting things to be said about how <code>inlining</code> is conducted
inside of our compiler. <code>Inlining</code> in itself is rather ubiquitous in compilers.
The goal here is to show how we approach <code>inlining</code>, and present what we call
<code>Speculative Inlining</code>.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#inliningingeneral">Inlining in general</a>
</li>
<li><a href="#detrimentalinlining">When inlining is detrimental</a>
</li>
<li><a href="#beneficialinlining">How to decide when inlining is beneficial</a>
</li>
<li><a href="#speculativeinlining">Speculative inlining</a>
</li>
<li><a href="#speculativeinlininginpractice">Speculative inlining in practice</a>
</li>
<li><a href="#summary">Summary</a>
</li>
<li><a href="#conclusion">Conclusion</a>
</li>
</ul>
<p></div></p>
<h2>
<a id="inliningingeneral" class="anchor"></a><a class="anchor-link" href="#inliningingeneral">Inlining in general</a>
</h2>
<p>Given the way people write functional programs, <strong>inlining</strong> is an important part
of the optimisation pipeline of functional languages.</p>
<p>What we call <strong>inlining</strong> in this series is the process of duplicating some
code to specialise it to a specific context.</p>
<p>Usually, this can be thought of as copy-pasting the body of a function at its
call site. A common misunderstanding is to think that the main benefit of this
optimisation is to remove the cost of the function call. However, with modern
computer architectures, this has become less and less relevant in the last
decades. The actual benefit is to use the specific context to trigger further
optimisations.</p>
<p>Suppose we have the following <code>option_map</code> and <code>double</code> functions:</p>
<pre><code class="language-ocaml">let option_map f x =
match x with
| None -> None
| Some x -> Some (f x)
let double i =
i + i
</code></pre>
<p>Additionally, suppose we are currently considering the following function:</p>
<pre><code class="language-ocaml">let stuff () =
option_map double (Some 21)
</code></pre>
<p>In this short example, inlining the <code>option_map</code> function would perform the
following transformation:</p>
<pre><code class="language-ocaml">let stuff () =
let f = double in
let x = Some 21 in
match x with
| None -> None
| Some x -> Some (f x)
</code></pre>
<p>Now we can inline the <code>double</code> function.</p>
<pre><code class="language-ocaml">let stuff () =
let x = Some 21 in
match x with
| None -> None
| Some x ->
Some (let i = x in i + i)
</code></pre>
<p>As you can see, inlining alone isn't that useful of an optimisation per se. In
this context, applying <code>Constant Propagation</code> will optimise and simplify it
to the following:</p>
<pre><code class="language-ocaml">let stuff () = Some 42
</code></pre>
<p>Although this is a toy example, combining small functions is a common pattern
in functional programs. It's very convenient that using combinators is <strong>not</strong>
significantly worse than writing this function by hand.</p>
<h2>
<a id="detrimentalinlining" class="anchor"></a><a class="anchor-link" href="#detrimentalinlining">When inlining is detrimental</a>
</h2>
<p>We cannot just go around and inline everything, everywhere... all at once.</p>
<p>As we said, inlining is mainly code duplication, and doing it indiscriminately
would blow up the size of the compiled code drastically. However, there
is a sweet spot to be found between inlining everything and not inlining at
all, but it is hard to find.</p>
<p>Here's an example of exploding code at inlining time:</p>
<pre><code class="language-ocaml">(* val h : int -> int *)
let h n = (* Some non constant expression *)
(* val f : (int -> int) -> int -> int *)
let f g x = g (g x)
(* 4 calls to f -> 2^4 calls to h *)
let n = f (f (f (f h))) 42
</code></pre>
<p>Following through with the inlining process will produce a very large binary
relative to its source code. This contrived example highlights potential
problems that might arise in ordinary codebases in the wild, even if this one
is tailored to be <strong>quite nasty</strong> for inlining: notice the exponential blowup
in the number of nested calls, every additional call to <code>f</code> doubles the number
of calls to <code>h</code> after inlining.</p>
<h2>
<a id="beneficialinlining" class="anchor"></a><a class="anchor-link" href="#beneficialinlining">How to decide when inlining is beneficial</a>
</h2>
<p>Most compilers use a collection of heuristics to guide them in the decision
making. A good collection of heuristics is hard both to design and to fine-tune.
They can also be quite specific to a programming style and unfit for other
compilers to integrate. The takeaway is: <strong>there is no best way</strong>.</p>
<blockquote>
<p><strong>Side Note:</strong></p>
<p>This topic would make for an interesting blog post but,
unfortunately, rather remote from the point of this article. If you are
interested in going deeper into that subject right now, we have found
references for you to explore until we get around to writing a comprehensive,
and more digestible, explanation of the heuristic nature of inlining:</p>
<ul>
<li><a href="https://www.cambridge.org/core/services/aop-cambridge-core/content/view/8DD9A82FF4189A0093B7672193246E22/S0956796802004331a.pdf/secrets-of-the-glasgow-haskell-compiler-inliner.pdf"><strong>Secrets of the Glasgow Haskell Compiler inliner</strong>, <em>by SIMON PEYTON JONES and SIMON MARLOW, 2002</em></a>.
</li>
<li><a href="https://web.archive.org/web/20010615153947/https://www.cs.indiana.edu/~owaddell/papers/thesis.ps.gz"><strong>Extending the Scope of Syntactic Abstraction</strong>, <em>by OSCAR WADDELL, 1999. Section 4.4</em> (<strong>PDF Download link</strong>)</a>, for the case of Scheme.
</li>
<li><a href="https://dl.acm.org/doi/10.1145/182409.182489"><strong>Towards Better Inlining Decisions Using Inlining Trials</strong>, <em>by JEFFREY DEAN and CRAIG CHAMBERS, 1994</em></a>.
</li>
<li><a href="https://ethz.ch/content/dam/ethz/special-interest/infk/ast-dam/documents/Theodoridis-ASPLOS22-Inlining-Paper.pdf"><strong>Understanding and Exploiting Optimal Function Inlining</strong>, <em>by THEODOROS THEODORIDIS, TOBIAS GROSSER, ZHENDONG SU, 2022</em></a>.
</li>
</ul>
</blockquote>
<p>Before we get to a concrete example, and break down <code>Speculative Inlining</code> for
you, we would like to discuss the trade-offs of duplicating code.</p>
<p>CPUs execute instructions one by one, or at least they pretend that they do. In
order to execute an instruction, they need to load into memory both code and
data. In modern CPUs, most instructions take only a few cycles to execute, and
in practice CPUs often execute several at the same time. Loading from memory,
however, can in the worst case take hundreds of CPU cycles... Most of the time
this does not happen, because CPUs have complex memory cache hierarchies:
loading from the instruction cache takes just a few cycles, loading from the
level-2 cache may take dozens of them, and the worst case, loading from main
memory, can take hundreds of cycles.</p>
<p>The takeaway is that, when executing a program, the cost of one instruction
that has to be loaded from main memory can be
<a href="https://norvig.com/21-days.html#answers">larger</a> than the cost of executing a
hundred instructions in caches.</p>
<p>There is a way to avoid the worst-case scenario. Since caches are rather small,
the main way to avoid loading from main memory is to keep
your program rather small, or at least the parts of it that are regularly
executed.</p>
<p>Keep these orders of magnitude in mind when we address the trade-offs between
improving the number of instructions that we run and keeping the program to a
reasonably small size.</p>
<hr />
<p>Before explaining <code>Speculative Inlining</code> let's consider a piece of code.</p>
<p>The following pattern is quite common in OCaml and other functional languages,
let's see how one would go about inlining this code snippet.</p>
<p><strong>Example 1:</strong> Notice the higher-order function <code>f</code>:</p>
<pre><code class="language-ocaml">(*
val f :
(condition:bool -> int -> unit)
-> condition:bool
-> int
-> unit
*)
let f g ~condition n =
for i = 0 to n do
g ~condition i
done
let g_real ~condition i =
if condition then
(* small operation *)
else
(* big piece of code *)
let condition = true
let foo n =
f g_real ~condition n
</code></pre>
<p>Even for such a small example, we will see that the heuristics involved in finding
the right solution can become quite complex.</p>
<p>Keeping in mind the fact that <code>condition</code> is always <code>true</code>, the best set of
inlining decisions would yield the following code:</p>
<pre><code class="language-ocaml">(* All the code before [foo] is kept as is, from the previous codeblock *)
let foo x =
for i = 0 to x do
(* small operation *)
done
</code></pre>
<p>But if <code>condition</code> had always been <code>false</code>, instead of <code>small operation</code>, we
would have had a big chunk of <code>g_real</code> duplicated in <code>foo</code> (i.e., <code>(* big piece of code *)</code>), and it would
have only spared us the running time of a few <code>call</code> instructions. Therefore,
we would probably have preferred not to inline anything.</p>
<p>Specifically, we would have liked to refrain from inlining <code>g</code>, as well as
to avoid inlining <code>f</code>, because it would have needlessly increased the
size of the code with no substantial benefit.</p>
<p>However, if we want to be able to take an educated decision based on the value
of <code>condition</code>, we will have to consider the entirety of the code relevant to
that choice. Indeed, if we just look at the code for <code>f</code>, or its call site in
<code>foo</code>, nothing would guide us to the right decision. In order to take the
right decision, we need to understand that if the <code>~condition</code> parameter to the
<code>g_real</code> function is <code>true</code>, then we can remove a <strong>large</strong> piece of code,
namely: the <code>else</code> branch and the condition check as well.</p>
<p>But to understand that the <code>~condition</code> in <code>g_real</code> is always <code>true</code>, we need
to see it in the context of <code>f</code> in <code>foo</code>. This implies, again, that the choice
of inlining is not based on a property of <code>g_real</code> but rather on a property of the
context of its call.</p>
<p>There exists a <strong>very large</strong> number of combinations of such difficult
situations that would each require <strong>different</strong> heuristics which would be
incredibly tedious to design, implement, and maintain.</p>
<h2>
<a id="speculativeinlining" class="anchor"></a><a class="anchor-link" href="#speculativeinlining">Speculative inlining</a>
</h2>
<p>We manage to circumvent the hurdle that this decision problem represents
thanks to what we call <code>Speculative Inlining</code>. This strategy requires two
properties from the compiler: the ability to inline and optimise at the same
time, as well as being able to backtrack inlining decisions.</p>
<p>Let's look at <strong>Example 1</strong> again and examine the <code>Speculative Inlining</code>
strategy.</p>
<pre><code class="language-ocaml">let f g ~condition n =
for i = 0 to n do
g ~condition i
done
let g_real ~condition x =
if condition then
(* small operation *)
else
(* big piece of code *)
let condition = true
let foo x =
f g_real ~condition x
</code></pre>
<p>We will focus only on the traversal of the <code>foo</code> function.</p>
<p>Before we try and inline anything, there are a couple of things we have to keep in
mind about values and functions in OCaml:</p>
<ol>
<li><strong>Application arity may not match function arity</strong>
</li>
</ol>
<p>To give you an idea, the function <code>foo</code> could also have been written in the
following way:</p>
<pre><code class="language-ocaml">let foo x =
let f1 = f in
let f2 = f1 g_real in
let f3 = f2 ~condition in
f3 x
</code></pre>
<p>We expect the compiler to compile this version as well as the original, but we cannot
inline a function unless all its arguments are provided. To solve this, we need
to handle partial applications precisely. Over-applications, where a call site
provides more arguments than the function's syntactic arity expects, present
similar challenges.</p>
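<p>For instance, here is a small, hypothetical illustration of over-application
(not taken from the original example): <code>add</code> syntactically takes a single
argument and returns a closure, yet the call site applies it to two arguments at
once, so the compiler has to split the application before it can reason about
inlining.</p>
<pre><code class="language-ocaml">(* [add] has syntactic arity 1: it returns a closure expecting [y]. *)
let add x = fun y -> x + y

(* Over-application: two arguments are supplied to a one-argument function.
   This is evaluated as [(add 3) 5]. *)
let eight = add 3 5
</code></pre>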
<ol start="2">
<li><strong>Functions are values in OCaml</strong>
</li>
</ol>
<p>We have to understand that the call to <code>f</code> in <code>foo</code> is <strong>not</strong> trivially a
direct call to <code>f</code> in this context. Indeed, at this point functions could
instead be stored in pairs, or lists, or even hashtables, to be later retrieved
and applied at will, and we call such functions <strong>general functions</strong>.</p>
<p>Since our goal is to inline it, we <strong>need</strong> to know the body of the function. We
call a function <strong>concrete</strong> when we have knowledge of its body. This entails
<a href="https://en.wikipedia.org/wiki/Constant_folding"><code>Constant Propagation</code></a>
in order to associate a <strong>concrete</strong> function to <strong>general</strong> function values and,
consequently, be able to simplify it while inlining.</p>
<p>Here's the simplest case to demonstrate the importance of <code>Constant Propagation</code>.</p>
<pre><code class="language-ocaml">let foo_bar y =
let pair = foo, y in
(fst pair) (snd pair)
</code></pre>
<p>In this case, we have to look inside the pair in order to find the function;
this demonstrates that we sometimes have to do some amount of <strong>value analysis</strong> in
order to proceed. It's quite common to come across such cases in OCaml programs
because of the module system, and other functional languages present similar
characteristics.</p>
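<p>To make the module-system point concrete, here is a small, hypothetical sketch
(not taken from the article): the function actually called in <code>use</code> only becomes
apparent once the compiler tracks values through the functor application.</p>
<pre><code class="language-ocaml">module type Ord = sig
  type t
  val compare : t -> t -> int
end

module MakeMax (O : Ord) = struct
  (* The call to [O.compare] only resolves to a concrete function
     once we know which module [O] stands for. *)
  let max a b = if O.compare a b >= 0 then a else b
end

module IntMax = MakeMax (Int)

(* Value analysis reveals that [IntMax.max] ultimately calls
   [Int.compare], making both calls candidates for inlining. *)
let use () = IntMax.max 1 2
</code></pre>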
<p>There are many scenarios which also require a decent amount of context in order
to identify which function should be called. For example, when a function
passed as parameter is called, we need to know the context of the caller
function<strong>s</strong>, sometimes up to an arbitrarily large context. Analysing the
relevant context will tell us which function is being called and thus help
us make educated inlining decisions. This problem is mostly specific to functional
languages: functions in good old imperative languages are seldom ambiguous,
although such considerations become relevant when function pointers are
involved.</p>
<p>This small code snippet shows us that we <strong>have</strong> to inline some functions in
order to know whether we should have inlined them.</p>
<h3>
<a id="speculativeinlininginpractice" class="anchor"></a><a class="anchor-link" href="#speculativeinlininginpractice">Speculative inlining in practice</a>
</h3>
<p>In practice, <code>Speculative Inlining</code> means being able to quantify the benefits
brought by the set of optimisations that become possible after a given
inlining decision, and using these results to determine whether said inlining decision
is in fact worth carrying out, all things considered.</p>
<p>The criterion for accepting an inlining decision is that the resulting code
<strong>should be</strong> faster than the original one. We use <em>"should be"</em> because
program speed cannot be fully understood with absolutes.</p>
<p>That's why we use a heuristic algorithm in order to compare the original and
the optimised versions of the code. It roughly consists in counting the number
of retired (executed) instructions and comparing it to the increase in code
size introduced by inlining the body of that function. The value of that
cut-off ratio is by definition heuristic and different compilation options
given to <code>ocamlopt</code> change it.</p>
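<p>A minimal, hypothetical sketch of that kind of cut-off test might look as
follows; the actual benefit computation inside <code>Flambda2</code> is considerably more
involved, and the names below are ours, not the compiler's.</p>
<pre><code class="language-ocaml">(* Rough estimate attached to a candidate inlining decision. *)
type benefit = {
  removed_instructions : int; (* estimated executed instructions saved *)
  size_increase : int;        (* estimated growth of the generated code *)
}

(* Accept the decision only when the estimated speed-up outweighs the
   code-size growth by a heuristic ratio (which, in the real compiler,
   depends on the optimisation options passed to ocamlopt). *)
let worth_inlining ~cut_off_ratio { removed_instructions; size_increase } =
  float_of_int removed_instructions
  >= cut_off_ratio *. float_of_int size_increase
</code></pre>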
<p>As said previously, we cannot go around and evaluate each inlining decision
independently because there are cases where inlining a function allows for more
of them to happen, and sometimes a given inlining choice validates another one.
We can see this in <strong>Example 1</strong>, where deciding <strong>not</strong> to inline function
<code>g_real</code> would make the inlining of function <code>f</code> useless.</p>
<p>Naturally, the combinations of inlining decisions cannot all be explored
exhaustively. We can only explore a small subset of them, and for that we have
another heuristic that was already used in <code>Flambda1</code>, although <code>Flambda2</code> does
not yet implement it in full.</p>
<p>It's quite simple: we choose to consider inlining decision relationships only
when there are nested calls. As for any other heuristic, it does not cover
every useful case, but not only is it the easiest to implement, we are also
fairly confident that it covers the most important cases.</p>
<p>Here's a small rundown of that heuristic:</p>
<ul>
<li><code>A</code> is a function which calls <code>B</code>
<ul>
<li><strong>Case 1</strong>: we evaluate the body of <code>A</code> at its definition, possibly inlining
<code>B</code> in the process
</li>
<li><strong>Case 2</strong>: at a specific callsite of <code>A</code>, we evaluate <code>A</code> in the inlining
context.
<ul>
<li><strong>Case 2.a</strong>: inlining <code>A</code> is beneficial no matter the decision on <code>B</code>, so we
do it.
</li>
<li><strong>Case 2.b</strong>: inlining <code>A</code> is potentially detrimental, so we go and evaluate
<code>B</code> before deciding to inline <code>A</code> for good.
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>Keep in mind that case <strong>2.b</strong> is recursive and can go arbitrarily deep. This
amounts to looking for the best leaf in the decision tree. Since we can't
explore the whole tree, we do have a limit on the depth of the
exploration.</p>
<blockquote>
<p><strong>Reminder for our fellow Cameleers</strong>: <code>Flambda1</code> and <code>Flambda2</code> have a flag
you can pass through the CLI which will generate a <code>.org</code> file which will
detail all the inlining decisions taken by the compiler. That flag is:
<code>-inlining-report</code>. Note that <code>.org</code> files make it easy to visualise a
decision tree inside the Emacs editor.</p>
</blockquote>
<h2>
<a id="summary" class="anchor"></a><a class="anchor-link" href="#summary">Summary</a>
</h2>
<p>By now, you should have a better understanding of the intricacies inherent to
<code>Speculative Inlining</code>. Prior to its initial inception, it was fair to question
how feasible (and suitable, considering the many requirements for developing a
compiler) such an algorithm would be in practice. Since then, it has
demonstrated its usefulness in <code>Flambda1</code> and, consequently, its porting to
<code>Flambda2</code> was called for.</p>
<p>So before we move on to the next stop in the
<a href="/blog/2024_03_18_the_flambda2_snippets_0#listing"><strong>F2S</strong></a> series, lets
summarize what we know of <code>Speculative Inlining</code>.</p>
<p>We learned that <strong>inlining</strong> is the process of copying the body of a function at
its callsite. We also learned that it is not a very interesting transformation by
itself, especially nowadays with how efficient modern CPUs are, but that its
usefulness is found in how it <strong>facilitates other optimisations</strong> to take place
later.</p>
<p>We also learned about the <strong>heuristic</strong> nature of inlining and how it would be
difficult to maintain finely-tailored heuristics in the long run as many others
have tried before us. Actually, it is because <strong>there is no best way</strong> that we
have come up with the need for an algorithm that is capable of simultaneously
performing <strong>inlining</strong> and <strong>optimising</strong> as well as <strong>backtracking</strong> when needed
which we called <code>Speculative Inlining</code>. In a nutshell, <code>Speculative Inlining</code>
is one of the algorithms of the optimisation framework of <code>Flambda2</code> which
facilitates other optimisations to take place.</p>
<p>We have covered the constraints that the algorithm has to respect for it to
hold ground in practice, like <strong>performance</strong>. We value a fast compiler and aim
to keep both its execution and the code it generates fast. Take an
optimisation such as <code>Constant Propagation</code> as an example.
It would be a <em>naïve</em> approach to try and perform this transformation
everywhere because the resulting complexity of the compiler would amount to
something like <code>size_of_the_code * number_of_inlinings_performed</code> which is
unacceptable to say the least. We aim at making the complexity of our compiler
linear in the code size, which in turn entails plenty of <strong>logarithms</strong> whenever
possible. Instead, we choose to apply any transformation only in the
inlined parts of the code.</p>
<p>With all these parameters in mind, can we imagine ways to tackle these
<strong>multi-layered challenges</strong> all at the same time? There are solutions out there
that do so in an imperative manner. In fact, the most intuitive way to
implement such an algorithm may be fairly easily done with imperative code. You
may want to read about <code>Equality Saturation</code> for instance, or even <a href="http://www-sop.inria.fr/members/Manuel.Serrano/publi/serrano-plilp97.ps.gz">download
Manuel Serrano's Paper inside the Scheme Bigloo
compiler</a>
to learn more about it. However, we require backtracking, and the nested
nature of these transformations (inlining, followed by different optimising
transformations) <strong>would make backtracking bug-prone and tedious to
maintain</strong> if it was to be written imperatively.</p>
<p>It soon became evident for us that we were going to leverage one of the key
characteristics of functional languages in order to make this whole ordeal
easier to design, implement and maintain: <strong>purity of terms</strong>. Indeed, not only is
it easier to support backtracking when manipulating <strong>pure</strong> code, but it also
becomes impossible for us to introduce cascades of hard to detect nested
bugs by avoiding transforming code <strong>in place</strong>. From this point on, we knew we
had to perform all transformations at the same time, making our inlining
function one that would return an <strong>optimised inlined function</strong>. This does
introduce complexities that we have chosen over the hurdles of maintaining an
imperative version of that same algorithm, which can be seen as pertaining to
<code>graph traversal</code> and <code>tree rewriting</code> for all intents and purposes.</p>
<p>Despite the density of this article, keep in mind that we aim at explaining
<code>Flambda2</code> in the most comprehensive manner possible and that there are
voluntary shortcuts taken throughout these snippets for all of this to make
sense for the broader audience.
In time, these articles will go deep into the guts of the compiler and by then,
hopefully, we will have done a good job at providing our readers with all
necessary information for all of you to continue enjoying this rabbit-hole with
us!</p>
<p>Here's a pseudo-code snippet representing <code>Speculative Inlining</code>.</p>
<pre><code class="language-ocaml">(* Pseudo-code to rpz the actual speculation *)
let try_inlining f env args =
let inlined_version_of_f = inline f env args in
let benefit = compare inlined_version_of_f f in
if benefit > 0 then
inlined_version_of_f
else
f
</code></pre>
<h2>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h2>
<p>As we said at the start of this article, this one is but an introduction to a
major topic we will cover next, namely: <code>Upwards and Downwards Traversals</code>.</p>
<p>We had to cover <code>Speculative Inlining</code> first. It is a reasonably approachable
solution to a complex problem, and having an idea of all the requirements for
a sound implementation is half of the work needed to understand key design
decisions, such as how code traversal was designed so that algorithms like
<code>Speculative Inlining</code> could hold up.</p>
<hr />
<p><strong>Thank you all for reading! We hope that these articles will keep the
community hungry for more!</strong></p>
<p><strong>Until next time, keep calm and OCaml!</strong>
<a href="https://egypt-museum.com/the-weighing-of-the-heart-ceremony/">⚱️🐫🏺📜</a></p>
<h1><a href="https://ocamlpro.com/blog/2024_07_01_opam_2_2_0_releases">opam 2.2.0 release!</a></h1>
<p>By Raja Boujbel (OCamlPro), Kate Deplaix (Ahrefs) and David Allsopp (Tarides), 2024-07-01</p>
<p><em>Feedback on this post is welcomed on <a href="https://discuss.ocaml.org/t/ann-opam-2-2-0-is-out/14893">Discuss</a>!</em></p>
<p>We are very pleased to announce the release of opam 2.2.0, and encourage all users to upgrade. Please read on for installation and upgrade instructions.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>, and published in <a href="https://discuss.ocaml.org/t/ann-opam-2-2-0-is-out/14893">discuss.ocaml.org</a>.</p>
</blockquote>
<h2>Try it!</h2>
<p>In case you plan a possible rollback, you may want to first backup your
<code>~/.opam</code> or <code>$env:LOCALAPPDATA\opam</code> directory.</p>
<p>The upgrade instructions are unchanged:</p>
<ol>
<li>Either from binaries: run
</li>
</ol>
<p>For Unix systems</p>
<pre><code class="language-shell-session">bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.2.0"
</code></pre>
<p>or from PowerShell for Windows systems</p>
<pre><code class="language-shell-session">Invoke-Expression "& { $(Invoke-RestMethod https://raw.githubusercontent.com/ocaml/opam/master/shell/install.ps1) }"
</code></pre>
<p>or download the binary manually from <a href="https://github.com/ocaml/opam/releases/tag/2.2.0">the GitHub "Releases" page</a> and put it in your PATH.</p>
<ol start="2">
<li>Or from source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.2.0#compiling-this-repo">README</a>.
</li>
</ol>
<p>You should then run:</p>
<pre><code class="language-shell-session">opam init --reinit -ni
</code></pre>
<h2>Changes</h2>
<h3>Major change: Windows support</h3>
<p>After 8 years' effort, opam and opam-repository now have official native Windows
support! A big thank you is due to Andreas Hauptmann (<a href="https://github.com/fdopen">@fdopen</a>),
whose <a href="https://github.com/fdopen/godi-repo">WODI</a> and <a href="https://fdopen.github.io/opam-repository-mingw/">OCaml for Windows</a>
projects were for many years the principal downstream way to obtain OCaml on
Windows, Jun Furuse (<a href="https://github.com/camlspotter">@camlspotter</a>) whose
<a href="https://inbox.vuxu.org/caml-list/CAAoLEWsQK7=qER66Uixx5pq4wLExXovrQWM6b69_fyMmjYFiZA@mail.gmail.com/">initial experimentation with OPAM from Cygwin</a>
formed the basis of opam-repository-mingw, and, most recently,
Jonah Beckford (<a href="https://github.com/JonahBeckford">@jonahbeckford</a>) whose
<a href="https://diskuv.com/dkmlbook/">DkML</a> distribution kept - and keeps - a full
development experience for OCaml available on Windows.</p>
<p>OCaml when used on native Windows requires certain tools from the Unix world
which are provided by either <a href="https://cygwin.com">Cygwin</a> or <a href="https://msys2.org">MSYS2</a>.
We have engineered <code>opam init</code> so that it is possible for a user not to need to
worry about this, with <code>opam</code> managing the Unix world, and the user being able
to use OCaml from either the Command Prompt or PowerShell. However, for the Unix
user coming over to Windows to test their software, it is also possible to have
your own Cygwin/MSYS2 installation and use native Windows opam from that. Please
see the <a href="https://opam.ocaml.org/blog/opam-2-2-0-windows/">previous blog post</a>
for more information.</p>
<p>There are two "ports" of OCaml on native Windows, referred to by the name of
provider of the C compiler. The mingw-w64 port is <a href="https://www.mingw-w64.org/">GCC-based</a>.
opam's external dependency (depext) system works for this port (including
providing GCC itself), and many packages are already well-supported in
opam-repository, thanks to the previous efforts in <a href="https://github.com/fdopen/opam-repository-mingw">opam-repository-mingw</a>.
The MSVC port is <a href="https://visualstudio.microsoft.com/">Visual Studio-based</a>. At
present, there is less support in this ecosystem for external dependencies,
though this is something we expect to work on both in opam-repository and in
subsequent opam releases. In particular, it is necessary to install
Visual Studio or Visual Studio BuildTools separately, but opam will then
automatically find and use the C compiler from Visual Studio.</p>
<h3>Major change: opam tree / opam why</h3>
<p><code>opam tree</code> is a new command showing packages and their dependencies with a tree view.
It is very helpful to determine which packages bring which dependencies in your installed switch.</p>
<pre><code class="language-shell-session">$ opam tree cppo
cppo.1.6.9
├── base-unix.base
├── dune.3.8.2 (>= 1.10)
│ ├── base-threads.base
│ ├── base-unix.base [*]
│ └── ocaml.4.14.1 (>= 4.08)
│ ├── ocaml-base-compiler.4.14.1 (>= 4.14.1~ & < 4.14.2~)
│ └── ocaml-config.2 (>= 2)
│ └── ocaml-base-compiler.4.14.1 (>= 4.12.0~) [*]
└── ocaml.4.14.1 (>= 4.02.3) [*]
</code></pre>
<p>Reverse-dependencies can also be displayed using the new <code>opam why</code> command.
This is useful to examine how dependency versions get constrained.</p>
<pre><code class="language-shell-session">$ opam why cmdliner
cmdliner.1.2.0
├── (>= 1.1.0) b0.0.0.5
│ └── (= 0.0.5) odig.0.0.9
├── (>= 1.1.0) ocp-browser.1.3.4
├── (>= 1.0.0) ocp-indent.1.8.1
│ └── (>= 1.4.2) ocp-index.1.3.4
│ └── (= version) ocp-browser.1.3.4 [*]
├── (>= 1.1.0) ocp-index.1.3.4 [*]
├── (>= 1.1.0) odig.0.0.9 [*]
├── (>= 1.0.0) odoc.2.2.0
│ └── (>= 2.0.0) odig.0.0.9 [*]
├── (>= 1.1.0) opam-client.2.2.0~alpha
│ ├── (= version) opam.2.2.0~alpha
│ └── (= version) opam-devel.2.2.0~alpha
├── (>= 1.1.0) opam-devel.2.2.0~alpha [*]
├── (>= 0.9.8) opam-installer.2.2.0~alpha
└── user-setup.0.7
</code></pre>
<blockquote>
<p>Special thanks to <a href="https://github.com/cannorin">@cannorin</a> for contributing this feature.</p>
</blockquote>
<h3>Major change: with-dev-setup</h3>
<p>There is now a way for a project maintainer to share their project development
tools: the <code>with-dev-setup</code> dependency flag. It is used in the same way as
<code>with-doc</code> and <code>with-test</code>: by adding a <code>{with-dev-setup}</code> filter after a
dependency. It will be ignored when installing normally, but it's pulled in when the
package is explicitly installed with the <code>--with-dev-setup</code> flag specified on
the command line.</p>
<p>For example</p>
<pre><code class="language-shell-session">opam-version: "2.0"
depends: [
"ocaml"
"ocp-indent" {with-dev-setup}
]
build: [make]
install: [make "install"]
post-messages:
[ "Thanks for installing the package"
"as well as its development setup. It will help with your future contributions" {with-dev-setup} ]
</code></pre>
<h3>Major change: opam pin --recursive</h3>
<p>When pinning a package using <code>opam pin</code>, opam looks for opam files in the root directory only.
With recursive pinning, you can now instruct opam to look for <code>.opam</code> files in
subdirectories as well, while maintaining the correct relationship between the <code>.opam</code>
files and the package root for versioning and build purposes.</p>
<p>Recursive pinning is enabled by the following options to <code>opam pin</code> and <code>opam install</code>:</p>
<ul>
<li>With <code>--recursive</code>, opam will look for <code>.opam</code> files recursively in all subdirectories.
</li>
<li>With <code>--subpath <path></code>, opam will only look for <code>.opam</code> files in the subdirectory <code><path></code>.
</li>
</ul>
<p>The two options can be combined: for instance, if your opam packages are stored
as a deep hierarchy in the <code>mylib</code> subdirectory of your project you can try
<code>opam pin . --recursive --subpath mylib</code>.</p>
<p>These options are useful when dealing with a large monorepo-type repository with many
opam libraries spread about.</p>
<h3>New Options</h3>
<ul>
<li>
<p><code>opam switch -</code>, inspired by <code>git switch -</code>, makes opam switch back to the previously
selected global switch.</p>
</li>
<li>
<p><code>opam pin --current</code> fixes a package to its current state (disabling pending
reinstallations or removals from the repository). The installed package will
be pinned to its current installed state, i.e. the pinned opam file is the
one installed.</p>
</li>
<li>
<p><code>opam pin remove --all</code> removes all the pinned packages from a switch.</p>
</li>
<li>
<p><code>opam exec --no-switch</code> removes the opam environment when running a command.
It is useful when you want to launch a command without opam environment changes.</p>
</li>
<li>
<p><code>opam clean --untracked</code> interactively removes untracked files left over
from previously removed packages.</p>
</li>
<li>
<p><code>opam admin add-constraint <cst> --packages pkg1,pkg2,pkg3</code> applies the given constraint
to a given set of packages.</p>
</li>
<li>
<p><code>opam list --base</code> has been renamed to <code>--invariant</code>, reflecting the fact that since opam 2.1 the "base" packages of a switch are instead expressed using a switch invariant.</p>
</li>
<li>
<p><code>opam install --formula <formula></code> installs a formula instead of a list of packages. This can be useful if you would like to install one package or another one. For example <code>opam install --formula '"extlib" |"extlib-compat"'</code> will install either <code>extlib</code> or <code>extlib-compat</code> depending on what's best for the current switch.</p>
</li>
</ul>
<h3>Miscellaneous changes</h3>
<ul>
<li>The UI now displays a status when extracting an archive or reloading a repository
</li>
<li>Overhauled the implementation of <code>opam env</code>, fixing many corner cases for environment updates and making the reverting of package environment variables precise. As a result, using <code>setenv</code> in an opam file no longer triggers a lint warning.
</li>
<li>Fix parsing pre-opam 2.1.4 switch import files containing extra-files
</li>
<li>Add a new <code>sys-ocaml-system</code> default global eval variable
</li>
<li>Hijack the <code>"%{var?string-if-true:string-if-false-or-undefined}%"</code> syntax to
support extending the variables of packages with <code>+</code> in their name
(<code>conf-c++</code> and <code>conf-g++</code> already exist) using <code>"%{?pkgname:var:}%"</code>
</li>
<li>Fix issues when using fish as shell
</li>
<li>Sandbox: Mark the user temporary directory
(as returned by <code>getconf DARWIN_USER_TEMP_DIR</code>) as writable when TMPDIR
is not defined on macOS
</li>
<li>Add Warning 69: Warn for new syntax when package name in variable in string
interpolation contains several '+' (this is related to the "hijack" item above)
</li>
<li>Add support for Wolfi OS, treating it like the Alpine family as it also uses apk
</li>
<li>Sandbox: <code>/tmp</code> is now writable again, restoring POSIX compliance
</li>
<li>Add a new <code>opam admin add-extrafiles</code> command to add/check/update the <code>extra-files:</code> field according to the files present in the <code>files/</code> directory
</li>
<li>Add a new <code>opam lint -W @1..9</code> syntax to allow marking a set of warnings as errors
</li>
<li>Fix bugs in the handling of the <code>OPAMCURL</code>, <code>OPAMFETCH</code> and <code>OPAMVERBOSE</code> environment variables
</li>
<li>Fix bugs in the handling of the <code>--assume-built</code> argument
</li>
<li>Software Heritage fallback is now supported, but is disabled by default for now. For more information you can read one of our <a href="https://opam.ocaml.org/blog/opam-2-2-0-alpha/#Software-Heritage-Binding">previous blog posts</a>
</li>
</ul>
<p>And many other general and performance improvements were made and bugs were fixed.
You can take a look at the previous blog posts.
API changes and a more detailed description of the changes are listed in:</p>
<ul>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0-alpha">the release note for 2.2.0~alpha</a>
</li>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0-alpha2">the release note for 2.2.0~alpha2</a>
</li>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0-alpha3">the release note for 2.2.0~alpha3</a>
</li>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0-beta1">the release note for 2.2.0~beta1</a>
</li>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0-beta2">the release note for 2.2.0~beta2</a>
</li>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0-beta3">the release note for 2.2.0~beta3</a>
</li>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0-rc1">the release note for 2.2.0~rc1</a>
</li>
<li><a href="https://github.com/ocaml/opam/releases/tag/2.2.0">the release note for 2.2.0</a>
</li>
</ul>
<p>This release also includes PRs improving the documentation and improving
and extending the tests.</p>
<p>Please report any issues to <a href="https://github.com/ocaml/opam/issues">the bug-tracker</a>.</p>
<p>We hope you will enjoy the new features of opam 2.2! 📯</p>
<h1><a href="https://ocamlpro.com/blog/2024_05_07_the_flambda2_snippets_2">Flambda2 Ep. 2: Loopifying Tail-Recursive Functions</a></h1>
<p>By Nathanaëlle Courant, Guillaume Bury, Pierre Chambart, Vincent Laviron and Dario Pinto, 2024-05-07</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/F2S_loopify_figure.png">
<img alt="Two camels are taking a break from crossing the desert, they know their path could not have been more optimised." src="/blog/assets/img/F2S_loopify_figure.png"/>
</a>
<div class="caption">
Two camels are taking a break from crossing the desert, they know their path could not have been more optimised.
</div>
</p>
</div>
</p>
<h3>Welcome to a new episode of <strong>The Flambda2 Snippets</strong>!</h3>
<p>Today's topic is <code>Loopify</code>, one of <code>Flambda2</code>'s many optimisation algorithms
which specifically deals with optimising functions that are <em>purely tail-recursive</em> and/or
<em>annotated</em> with the <code>[@@loop]</code> attribute in OCaml.</p>
<p>A lazy explanation for its utility would be to say that it simply aims at
reducing the number of memory allocations in the context of <em>recursive</em> and
<em>tail-recursive</em> function calls in OCaml. However, we will see that this is just
<strong>part</strong> of the point and thus we will tend to address the broader context:
what are <em>tail-calls</em>, how they are optimised and how they fit in the
functional programming world, what dilemma does <code>Loopify</code> nullify exactly and,
in time, many details on how it's all implemented!</p>
<p>If you happen to be stumbling upon this article and wish to get a bird's-eye
view of the entire <strong>F2S</strong> series, be sure to refer to <a href="/blog/2024_03_18_the_flambda2_snippets_0">Episode
0</a> which does a good amount of
contextualising as well as summarising of, and pointing to, all subsequent
episodes.</p>
<p><strong>All feedback is welcome, thank you for staying tuned and happy reading!</strong></p>
<blockquote>
<p>The <strong>F2S</strong> blog posts aim at gradually introducing the world to the
inner-workings of a complex piece of software engineering: The <code>Flambda2 Optimising Compiler</code>, a technical marvel born from a 10 year-long effort in
Research & Development and Compilation; with many more years of expertise in
all aspects of Computer Science and Formal Methods.</p>
</blockquote>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#tco">Tail-Call Optimisation</a>
</li>
<li><a href="#tailcallsinocaml">Tail-Calls in OCaml</a>
</li>
<li><a href="#conundrum">The Conundrum of Reducing allocations Versus Writing Clean Code</a>
</li>
<li><a href="#loopify">Loopify</a>
<ul>
<li><a href="#concept">Concept</a>
</li>
<li><a href="#toloopifyornottoloopify">Deciding to Loopify or not</a>
</li>
<li><a href="#thetransformation">The nature of the transformation</a>
</li>
</ul>
</li>
<li><a href="#conclusion">Conclusion</a>
</div>
</li>
</ul>
<h2>
<a id="tco" class="anchor"></a><a class="anchor-link" href="#tco">Tail-Call Optimisation</a>
</h2>
<p>As far as we know, Tail-Call optimisation (TCO) has been a reality since at
least the 70s. Some LISP implementations used it, and Scheme made it part of
its language specification around 1975.</p>
<p>The debate over supporting TCO still comes up regularly today. Nowadays, it's a given
that most functional languages support it (Scala, OCaml, Haskell, Scheme and so
on...). Other languages and compilers have supported it for some time too.
Either optionally, with some C compilers (gcc and clang) that support TCO in
some specific compilation scenarios; or systematically, like Lua, which, despite
not usually being considered a functional language, specifies that TCO occurs
whenever possible (<a href="https://www.lua.org/manual/5.3/manual.html#3.4.10">you may want to read section 3.4.10 of the Lua manual
here</a>).</p>
<p><strong>So what exactly is Tail-Call Optimisation ?</strong></p>
<p>A place to start would be the <a href="https://en.wikipedia.org/wiki/Tail-call_optimisation">Wikipedia
page</a>. You may also find
some precious insight about the link between the semantics of <code>GOTO</code> and tail
calls <a href="https://www.college-de-france.fr/fr/agenda/cours/structures-de-controle-de-goto-aux-effets-algebriques/programmer-ses-structures-de-controle-continuations-et-operateurs-de-controle">here</a>,
a course from Xavier Leroy at the <em>College de France</em>, which is in French.</p>
<p>In addition to these resources, here are some images to help you visualise how TCO
improves stack memory consumption. Assume that <code>g</code> is a recursive function
called from <code>f</code>:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/F2S_2_stack_no_tail_rec_call.svg">
<img alt="A representation of the textbook behaviour for recursive functions stackframe allocations. You can see here that the stackframes of non-tail-recursive functions are allocated sequentially on decreasing memory addresses which may eventually lead to a stack overflow." src="/blog/assets/img/F2S_2_stack_no_tail_rec_call.svg"/>
</a>
<div class="caption">
A representation of the textbook behaviour for recursive functions stackframe allocations. You can see here that the stackframes of non-tail-recursive functions are allocated sequentially on decreasing memory addresses which may eventually lead to a stack overflow.
</div>
</p>
</div>
</p>
<p>Now, let's consider a tail-recursive implementation of the <code>g</code> function in a
context where TCO is <strong>not</strong> supported. Tail-recursion means that the last
thing <code>t_rec_g</code> does before returning is calling itself. The key point is that we
still have a frame for the calling instance of <code>t_rec_g</code>, but we know that it will
only be used to return to the parent. The frame no longer holds any
relevant information besides the return address, so the corresponding memory
space is mostly wasted.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/F2S_2_stack_tail_rec_call_no_tco.svg">
<img alt="A representation of the textbook behaviour for tail-recursive functions stackframe allocations without Tail Call Optimisation (TCO). When TCO is not implemented the behaviour for these allocations and the potential for a stack overflow are the same as with non-tail-recursive functions." src="/blog/assets/img/F2S_2_stack_tail_rec_call_no_tco.svg"/>
</a>
<div class="caption">
A representation of the textbook behaviour for tail-recursive functions stackframe allocations without Tail Call Optimisation (TCO). When TCO is not implemented the behaviour for these allocations and the potential for a stack overflow are the same as with non-tail-recursive functions.
</div>
</p>
</div>
</p>
<p>And finally, let us look at the same function in a context where TCO <strong>is</strong>
supported. It is now apparent that memory consumption is much improved by the
fact that we reuse the space from the previous stackframe to allocate the next
one all the while preserving its return address:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/F2S_2_stack_tail_rec_call_tco.svg">
<img alt="A representation of the textbook behaviour for tail-recursive functions stackframe allocations with TCO. Since TCO is implemented, we can see that the stack memory consumption is now constant, and that the potential that this specific tail-recursive function will lead to a stack overflow is diminished." src="/blog/assets/img/F2S_2_stack_tail_rec_call_tco.svg"/>
</a>
<div class="caption">
A representation of the textbook behaviour for tail-recursive functions stackframe allocations with TCO. Since TCO is implemented, we can see that the stack memory consumption is now constant, and that the potential that this specific tail-recursive function will lead to a stack overflow is diminished.
</div>
</p>
</div>
</p>
<h3>
<a id="tailcallsinocaml" class="anchor"></a><a class="anchor-link" href="#tailcallsinocaml">Tail-Calls in OCaml</a>
</h3>
<p>The <code>List</code> data structure is fundamental to and ubiquitous in functional
programming. Therefore, it's important to not have an arbitrary limit on the
size of lists that one can manipulate. Indeed, most <code>List</code> manipulation functions
are naturally expressed as recursive functions, and can most of the time be
implemented as tail-recursive functions. Without guaranteed TCO, a programmer
could not have the assurance that their program would not stack overflow at
some point. That reasoning also applies to a lot of other recursive data
structures that commonly occur in programs or libraries.</p>
<p>In OCaml, TCO is guaranteed. Ever since its inception, Cameleers have
unanimously agreed to guarantee the optimisation of tail-calls.
While the compiler has supported TCO from the beginning,
<a href="https://v2.ocaml.org/manual/attributes.html#ss%3Abuiltin-attributes">an attribute</a>,
<code>[@tailcall]</code>, was later added to help users ensure that their calls are in tail
position.</p>
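<p>To give a quick idea of how the attribute is used in practice, here is a small
sketch of ours (not taken from the manual page linked above): the compiler emits a
warning whenever an annotated application is not actually a tail call.</p>
<pre><code class="language-ocaml">(* The annotated application is checked to be a tail call. *)
let rec fold_left f acc = function
  | [] -> acc
  | x :: r -> (fold_left [@tailcall]) f (f acc x) r

(* Here the annotation triggers a warning: the recursive call is not in
   tail position, since its result is still used by (+) afterwards. *)
let rec length = function
  | [] -> 0
  | _ :: r -> 1 + (length [@tailcall]) r
</code></pre>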
<p>More recently, TCO was also extended with the <a href="https://v2.ocaml.org/manual/tail_mod_cons.html"><code>Tail Mod Cons</code>
optimisation</a>, which allows the compiler to
generate tail calls in more cases.</p>
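<p>As a short illustration (our own sketch), the textbook <code>map</code>, which is not
tail-recursive as written, can be made to run in constant stack space by annotating
it with the <code>[@tail_mod_cons]</code> attribute:</p>
<pre><code class="language-ocaml">(* The non-tail call [f x :: map f r] is rewritten by the compiler in
   destination-passing style, so the function no longer grows the stack. *)
let[@tail_mod_cons] rec map f = function
  | [] -> []
  | x :: r -> f x :: map f r
</code></pre>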
<h3>
<a id="conundrum" class="anchor"></a><a class="anchor-link" href="#conundrum">The Conundrum of Reducing Allocations Versus Writing Clean Code</a>
</h3>
<p>One of the main motivations for <code>Loopify</code> can be found in the
following conversation: a Discuss post about <a href="https://discuss.ocaml.org/t/how-to-speed-up-this-function/10286">the unboxing of floating-point
values in
OCaml</a> and
performance.</p>
<p><a href="https://discuss.ocaml.org/t/how-to-speed-up-this-function/10286/9">This specific
comment</a>
sparked a secondary conversation that you may want to read for yourself; you will
find a quick breakdown of it below, and it is a nice starting point for
understanding today's subject.</p>
<p>Consider the following code:</p>
<pre><code class="language-ocaml">let sum l =
let rec loop s l =
match l with
| [] -> s
| hd :: tl ->
(* This allocates a boxed float *)
let s = s +. hd in
loop s tl
in
loop 0. l
</code></pre>
<p>This is a simple tail-recursive implementation of a <code>sum</code> function for a list of
floating-point numbers. However this is not as efficient as we would like it to
be.</p>
<p>Indeed, OCaml needs a uniform representation of its values in order to
implement polymorphic functions. In the case of floating-point numbers, this
means that the numbers are boxed whenever they need to be used as generic
values.</p>
<p>Besides, every time we call a function, all parameters have to be treated as
generic values. We thus cannot avoid the boxing allocation at each recursive call in
this function.</p>
<p>If we were to optimise it in order to get every last bit of performance out of
it, we could try something like:</p>
<p><strong>Warning: The following was coded by trained professionals, do NOT try this at home.</strong></p>
<pre><code class="language-ocaml">let sum l =
(* Local references *)
let s = ref 0. in
let cur = ref l in
try
while true do
match !cur with
| [] -> raise Exit
| hd :: tl ->
(* Unboxed floats -> No allocation *)
s := !s +. hd;
cur := tl
done; assert false
with Exit -> !s (* The only allocation *)
</code></pre>
<p>While in general references introduce one allocation and a layer of indirection,
when the compiler can prove that a reference is strictly local to a given function,
it will use mutable variables instead of reference cells.</p>
<p>In our case, <code>s</code> and <code>cur</code> do not escape the function and are therefore eligible
for this optimisation.</p>
<p>After this optimisation, <code>s</code> is now a mutable variable of type <code>float</code> and so it
can also trigger another optimisation: <em>float unboxing</em>.</p>
<p>You can see more details
<a href="https://www.lexifi.com/blog/ocaml/unboxed-floats-ocaml/#">here</a> but note that,
in this specific example, all occurrences of boxing operations disappear except
a single one at the end of the function.</p>
<p><strong>We like to think that not forcing the user to write such code is a benefit, to
say the least.</strong></p>
<hr />
<h2>
<a id="loopify" class="anchor"></a><a class="anchor-link" href="#loopify">Loopify</a>
</h2>
<h3>
<a id="concept" class="anchor"></a><a class="anchor-link" href="#concept">Concept</a>
</h3>
<p>There is a general concept of transforming function-level control-flow into
direct <strong>IR</strong> continuations to benefit from "basic block-level" optimisations. One
such pattern is present in the local-function optimisation triggered by the
<code>[@local]</code> attribute. <a href="https://github.com/ocaml/ocaml/pull/2143">Here's the link to the PR that implements
it</a>. <code>Loopify</code> is an attempt to
extend the range of this kind of optimisation to proper (meaning <code>self</code>)
tail-recursive calls.</p>
<p>As you saw previously, in some cases (e.g. numerical computation), recursive
functions can hurt performance because they introduce allocations.</p>
<p>That lost performance can be recovered by hand-writing loops using local
references; however, it's unfortunate to encourage non-functional code in a
language such as OCaml.</p>
<p>One of <code>Flambda</code> and <code>Flambda2</code>'s goals is to avoid such situations and
to allow good-looking, functional code to be as performant as code which is
written and optimised by hand at the user level.</p>
<p>Therefore, we introduce a solution to the specific problem described above with
<code>Loopify</code>, which, in a nutshell, transforms tail-recursive functions into
non-recursive functions containing a loop, hence the name.</p>
<h3>
<a id="toloopifyornottoloopify" class="anchor"></a><a class="anchor-link" href="#toloopifyornottoloopify">Deciding to Loopify or not</a>
</h3>
<p>The decision to loopify a given function is made during the conversion from the
<code>Lambda</code> <strong>IR</strong> to the <code>Flambda2</code> <strong>IR</strong>. Loopification is triggered in two cases:</p>
<ul>
<li>when a function is purely tail-recursive -- meaning all the calls to itself within its
body are <code>self-tail</code> calls, also called <em>proper calls</em>;
</li>
<li>when an annotation is given by the user in the source code using the
<code>[@loop]</code> attribute;
</li>
</ul>
<p>Let's see two examples for them:</p>
<pre><code class="language-ocaml">(* Not a tail-rec function: is not loopified *)
let rec map f = function
| [] -> []
| x :: r -> f x :: map f r
(* Is tail-rec: is loopified *)
let rec fold_left f acc = function
| [] -> acc
| x :: r -> fold_left f (f acc x) r
</code></pre>
<p>Here, the decision to <code>loopify</code> is automatic and requires no input from the
user. Quite straightforward.</p>
<hr />
<p>Onto the second case now:</p>
<pre><code class="language-ocaml">(* Helper function, not recursive, nothing to do. *)
let log dbg f arg =
if dbg then
print_endline "Logging...";
f arg
[@@inline]
(*
Not tail-rec in the source, but may become
tail-rec after inlining of the [log] function.
At this point we can loopify, provided that the
user specified a [@@loop] attribute.
*)
let rec iter_with_log dbg f = function
| [] -> ()
| x :: r ->
f x;
log dbg (iter_with_log dbg f) r
[@@loop]
</code></pre>
<p>The recursive function <code>iter_with_log</code> is not initially purely tail-recursive.</p>
<p>However, after the inlining of the <code>log</code> function and subsequent simplification, the new
code for <code>iter_with_log</code> becomes purely tail-recursive.</p>
<p>At that point we have the ability to <code>loopify</code> the function, but we refrain from
doing so unless the user specifies the <code>[@@loop]</code> attribute on the function definition.</p>
<h3>
<a id="thetransformation" class="anchor"></a><a class="anchor-link" href="#thetransformation">The nature of the transformation</a>
</h3>
<p>Onto the details of the transformation.</p>
<p>First, we introduce a recursive continuation at the start of the function. Let's
call it <code>self</code> (written <code>k_self</code> in the snippets below).</p>
<p>Then, at each tail-recursive call, we replace the function call with a
continuation call to <code>self</code> with the same arguments as the original call.</p>
<pre><code class="language-ocaml">let rec iter_with_log dbg f l =
let_cont rec k_self dbg f l =
match l with
| [] -> ()
| x :: r ->
f x;
log dbg (iter_with_log dbg f) r
in
apply_cont k_self (dbg, f, l)
</code></pre>
<p>Then, we inline the <code>log</code> function:</p>
<pre><code class="language-ocaml">let rec iter_with_log dbg f l =
let_cont k_self dbg f l =
match l with
| [] -> ()
| x :: r ->
f x;
(* Here the inlined code starts *)
(*
We first start by binding the arguments of the
original call to the parameters of the function's code
*)
let dbg = dbg in
let f = iter_with_log dbg f in
let arg = r in
if dbg then
print_endline "Logging...";
f arg
in
apply_cont k_self (dbg, f, l)
</code></pre>
<p>Then, following these transformations, we discover a <em>proper</em> tail-recursive
call, which we replace with the corresponding continuation call.</p>
<pre><code class="language-ocaml">let rec iter_with_log dbg f l =
let_cont k_self dbg f l =
match l with
| [] -> ()
| x :: r ->
f x;
(* Here the inlined code starts *)
(*
Here, the let bindings have been substituted
by the simplification.
*)
if dbg then
print_endline "Logging...";
apply_cont k_self (dbg, f, r)
in
apply_cont k_self (dbg, f, l)
</code></pre>
<p>In this context, the benefit of transforming a function call to a continuation
call is mainly about allowing other optimisations to take place. As shown
in the previous section, one of these optimisations is <code>unboxing</code> which can be
important in some cases like numerical computation. Such optimisations can take
place because continuations are local to a function while OCaml ABI-abiding
function calls require a prior global analysis.</p>
<p>One could think that a continuation call is intrinsically cheaper than a
function call. However, the OCaml compiler already optimises self tail calls
so that they are as cheap as continuation calls (i.e., a single <code>jump</code>
instruction).</p>
<p>An astute reader could realise that this transformation can apply to any
function and will result in one of three outcomes:</p>
<ul>
<li>if the function is not tail-recursive, or not recursive at all, nothing
happens: the transformation is a no-op;
</li>
<li>if the function is purely tail-recursive, then all recursive calls are
replaced with continuation calls and the function, after optimisation, is no
longer recursive. This allows us to later inline it and even specialise
some of its arguments. This is precisely the case in which we automatically
decide to loopify a function;
</li>
<li>if the function is not purely tail-recursive but contains some tail-recursive
calls, then the transformation rewrites those calls but not the other
ones. This may result in better code, but it's hard to be sure in advance. In
such cases (and in cases where functions become purely tail-recursive only after
<code>inlining</code>), users can force the transformation by using the <code>[@@loop]</code>
attribute.
</li>
</ul>
<h2>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h2>
<p>Here it is, the concept behind the <code>Loopify</code> optimisation pass as well as the
general context and philosophy which led to its inception!</p>
<p>It should be clear enough now that having to choose between writing clean <strong>or</strong>
efficient code was always unsatisfactory to us. With <code>Loopify</code>, as well as with
the rest of the <code>Flambda</code> and <code>Flambda2</code> compiler backends, we aim at making
sure that users do <strong>not</strong> have to write imperative code to get efficient
programs: functional code should be just as fast. Ideally, any way of writing a
piece of code should be as efficient as the next.</p>
<p>This article describes one of the very first user-facing optimisations of this
series of snippets on <code>Flambda2</code>. We have not gotten into any of the neat
implementation details yet. This is a topic for another time. The functioning
of <code>Loopify</code> will be much clearer next time we talk about it.</p>
<p><code>Loopify</code> is only applied automatically when the tail-recursive nature of a
function call is visible in the source from the get-go. However, the
optimisations applied by <code>Loopify</code> can still very much be useful in other
situations as seen in <a href="#toloopifyornottoloopify">this section</a>. That is why we
have the <code>[@loop]</code> attribute in order to enforce <em>loopification</em>. Good canonical
examples for applying <code>Loopify</code> with the <code>[@loop]</code> attribute would be either of
the following: loopifying a partially tail-recursive function (i.e., a function
with only <em>some</em> tail-recursive paths), or loopifying functions which are not
obviously tail-recursive in the source code, but could become so after some
optimisation steps.</p>
<p>This transformation illustrates a core principle behind the <code>Flambda2</code> design:
applying a somewhat naïve optimisation that is not transformative by itself,
but that changes the way the compiler can look at the code, allowing it to trigger a
whole lot of other useful optimisations. Moreover, the fact that it is triggered in the
middle of the inlining phase allows some non-obvious cases to become radically better.
Coding a single optimisation that would discover the cases demonstrated in the
examples above would be quite complex, while this one is rather simple thanks
to these principles.</p>
<p>Throughout the entire series of snippets, we will continue seeing these
principles in action, starting with the next blog post that will introduce
<code>Downward and Upward Traversals</code>.</p>
<p><strong>Stay tuned, and thank you for reading, until next time, <em>see you Space
Cowboy</em>. <a href="https://fr.wikipedia.org/wiki/Cowboy_Bebop">🤠</a></strong></p>
Fixing and Optimizing the GnuCOBOL Preprocessorhttps://ocamlpro.com/blog/2024_04_30_fixing_and_optimizing_gnucobol2024-04-30T08:12:13Z2024-04-30T08:12:13Z
Fabrice Le Fessant
In this post, I will present some work that we did on the GnuCOBOL compiler, the only fully-mature open-source compiler for COBOL. It all started with a bug issued by one of our customers that we fixed by improving the preprocessing pass of the compiler. We later went on and optimised it to get bett...<p></p>
<p>In this post, I will present some work that we did on the GnuCOBOL
compiler, the only fully-mature open-source compiler for COBOL. It all started
with a bug issued by one of our customers that we fixed by
improving the preprocessing pass of the compiler. We later went on and
optimised it to get better performances than the initial version.</p>
<blockquote>
<p>Supporting the GnuCOBOL compiler has become one of our commercial
activities. If you are interested in this project, we have a
dedicated website on our <a href="https://get-superbol.com">SuperBOL offer</a>, a
set of tools and services to ease deploying GnuCOBOL in a company to
replace proprietary COBOL environments.</p>
</blockquote>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/craiyon-gnucobol-optimization.webp">
<img alt="At
OCamlPro, we often favor correctness over performance. But at the
end, our software is correct AND often faster than its competitors!
Optimizing software is an art, that often contradicts popular
beliefs." src="/blog/assets/img/craiyon-gnucobol-optimization.webp"/>
</a>
<div class="caption">
At
OCamlPro, we often favor correctness over performance. But at the
end, our software is correct AND often faster than its competitors!
Optimizing software is an art, that often contradicts popular
beliefs.
</div>
</p>
</div>
</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#replacing">Preprocessing and Replacements in COBOL</a>
</li>
<li><a href="#gnucobol">Preprocessing in the GnuCOBOL Compiler</a>
</li>
<li><a href="#standard">Conformance to the ISO Standard</a>
</li>
<li><a href="#automata">Preprocessing with Automata on Streams</a>
</li>
<li><a href="#issues">Some Performance Issues</a>
</li>
<li><a href="#allocations">Optimising Allocations</a>
</li>
<li><a href="#fastpaths">What about Fast Paths ?</a>
</li>
<li><a href="#conclusion">Conclusion</a>
</div>
</li>
</ul>
<h2>
<a id="replacing" class="anchor"></a><a class="anchor-link" href="#replacing">Preprocessing and Replacements in COBOL</a>
</h2>
<p>COBOL was born in 1959, at a time when the science of programming
languages was just starting. If you had to design a new language for
the same purpose today, the result would be very different; you would
make different mistakes, but maybe not fewer. Actually, COBOL has proven
particularly resilient to time, as it is still in use more than 60
years later! Though it has evolved over the years (the <a href="https://www.iso.org/fr/standard/74527.html">last ISO
standard for COBOL</a> was
released in January 2023), the kernel of the language is still the
same, showing that most of the initial design choices were not perfect, but
still got the job done.</p>
<p>One of these choices, which would surely scare off young developers, is how COBOL
favors code reusability and sharing, through replacements done
in its preprocessor.</p>
<p>Let's consider this COBOL code, this will be our example for the rest of this
article:</p>
<pre><code class="language-COBOL">DATA DIVISION.
WORKING-STORAGE SECTION.
01 VAL1.
COPY MY-RECORD REPLACING ==:XXX:== BY ==VAL1==.
01 VAL2.
COPY MY-RECORD REPLACING ==:XXX:== BY ==VAL2==.
01 COUNTERS.
05 COUNTER-NAMES PIC 999 VALUE 0.
05 COUNTER-VALUES PIC 999 VALUE 0.
</code></pre>
<p>We are using the <em>free</em> format, a modern way of formatting code; the
older <em>fixed</em> format would require leaving a margin of 7 characters
on the left. We are in the <code>DATA</code> division, the part of the program
that defines the format of data, and specifically, in the
<code>WORKING-STORAGE</code> section, where global variables are defined. In
<em>standard</em> COBOL, there are no local variables, so the
<code>WORKING-STORAGE</code> section usually contains all the variables of the
program, even temporary ones.</p>
<p>In COBOL, there are variables of basic types (integers and strings
with specific lengths), and composite types (arrays and
records). Records are defined using levels: global variables are at
level <code>01</code> (such as <code>VAL1</code>, <code>VAL2</code> and <code>COUNTERS</code> in our example),
whereas most other levels indicate inner fields: here,
<code>COUNTER-NAMES</code> and <code>COUNTER-VALUES</code> are two direct fields of
<code>COUNTERS</code>, as shown by their lower level <code>05</code> (both are actually
integers of 3 digits as specified by <code>PIC 999</code>). Moreover, COBOL
programmers like to be able to access fields directly, by giving them
unique names in the program: it is thus possible to use <code>COUNTER-NAMES</code>
everywhere in the program, without referring to <code>COUNTERS</code> itself
(note that if the field hadn't been given a unique name, it would still be possible
to use <code>COUNTER-NAMES OF COUNTERS</code> to disambiguate).</p>
<p>On the other hand, in older versions of COBOL, there were no type definitions.</p>
<p><strong>So how would one create two record variables with the same content?</strong></p>
<p>One
would use the preprocessor to include, several times, the same file
describing the structure of the record. One would
also use that same file to describe the format of the
data files storing such records. Actually, COBOL developers use
external tools to manage data files and generate the
descriptions, which are then included into COBOL programs in order to manipulate those files
(<code>pacbase</code>, for example, is one such tool).</p>
<p>In our example, there would be a file <code>MY-RECORD.CPY</code> (usually called
a <em>copybook</em>), containing something like the following somewhere in the
filesystem:</p>
<pre><code class="language-COBOL">05 :XXX:-USERNAME PIC X(30).
05 :XXX:-BIRTHDATE.
10 :XXX:-BIRTHDATE-YEAR PIC 9999.
10 :XXX:-BIRTHDATE-MONTH PIC 99.
10 :XXX:-BIRTHDATE-MDAY PIC 99.
05 :XXX:-ADDRESS PIC X(100).
</code></pre>
<p>This code excerpt is actually not really correct COBOL code, because
identifiers cannot contain a <code>:XXX:</code> part. It was instead written
to be included <strong>and modified</strong> in other COBOL programs.</p>
<p>Indeed, the following line will include the file and perform a replacement of a
<code>:XXX:</code> partial token by <code>VAL1</code>:</p>
<pre><code class="language-COBOL">COPY MY-RECORD REPLACING ==:XXX:== BY ==VAL1==.
</code></pre>
<p>So, in our main example, we now have two global record variables
<code>VAL1</code> and <code>VAL2</code>, of the same format, but containing fields with
unique names such as <code>VAL1-USERNAME</code> and <code>VAL2-USERNAME</code>.</p>
<p>Allow me to repeat that, despite their peculiar nature, these features <strong>have</strong> stood
the test of time.</p>
<p>The journey continues. Suppose now that you are in a specific part
of your program and that you wish to manipulate longer user names; say, you
would like the <code>:XXX:-USERNAME</code> variable to be of size <code>60</code> instead of <code>30</code>.</p>
<p>Here is how you could do it:</p>
<pre><code class="language-COBOL"> [...]
REPLACE ==PIC X(30)== BY ==PIC X(60)==.
01 VAL1.
COPY [...]
REPLACE OFF.
01 COUNTERS.
[...]
</code></pre>
<p>Here, we can replace a list of consecutive tokens <code>PIC X(30)</code> by
another list of tokens <code>PIC X(60)</code>. The result is that the fields
<code>VAL1-USERNAME</code> and <code>VAL2-USERNAME</code> are now <code>60</code> bytes long.</p>
<p><code>REPLACE</code> and <code>COPY REPLACING</code> can both perform the same kind of
replacements on both parts of tokens (using <code>LEADING</code> or <code>TRAILING</code>
keywords) and lists of tokens. COBOL programmers combine them to
perform their daily job of building consistent software, by sharing
formats using shared copybooks.</p>
<p>Let's see now how GnuCOBOL can deal with that.</p>
<h2>
<a id="gnucobol" class="anchor"></a><a class="anchor-link" href="#gnucobol">Preprocessing in the GnuCOBOL Compiler</a>
</h2>
<p>The GnuCOBOL compiler is a transpiler: it translates COBOL source code
into C89 source code, that can then be compiled to executable code by
a C compiler. It has two main benefits: <strong>high portability</strong>, as
GnuCOBOL will work on any platform with any C compiler, including very
old hardware and mainframes, and <strong>simplicity</strong>, as code generation is
reduced to its minimum; most of the code of the compiler is its
parser... which is actually still huge, as COBOL is a particularly rich
language.</p>
<p>GnuCOBOL implements many dialects (i.e. extensions of COBOL available in
proprietary compilers such as IBM's, MicroFocus', etc.), in order to provide a
solution to the migration issues posed by proprietary platforms.</p>
<blockquote>
<p>The support of dialects is one of the most interesting features of
GnuCOBOL: by supporting natively many extensions of proprietary
compilers, it is possible to migrate applications from these
compilers to GnuCOBOL without modifying the sources, allowing to run
the same code on the old platform and the new one during all the
migration.</p>
<p>One of OCamlPro's main contributions to GnuCOBOL has been
to create such a dialect for GCOS7, a former Bull mainframe still in
use in some places.</p>
</blockquote>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/bull-dps7-gcos7.jpg">
<img alt="This is a Bull DPS-7
mainframe around 1980, running the GCOS7 operating system. Such
systems are still used to run COBOL critical applications in some
companies, though running on software emulators on PCs. GnuCOBOL is a
mature solution to migrate such applications to modern Linux
computers." src="/blog/assets/img/bull-dps7-gcos7.jpg"/>
</a>
<div class="caption">
This is a Bull DPS-7
mainframe around 1980, running the GCOS7 operating system. Such
systems are still used to run COBOL critical applications in some
companies, though running on software emulators on PCs. GnuCOBOL is a
mature solution to migrate such applications to modern Linux
computers.
</div>
</p>
</div>
</p>
<p>To perform its duty, GnuCOBOL processes COBOL source files in two
passes: it preprocesses them during the first phase, generating a new
temporary COBOL file with all inclusions and replacements done, and
then parses this file and generates the corresponding C code.</p>
<p>To do that, GnuCOBOL includes two pairs of lexers and parsers, one for each
phase. The first pair only recognises a very limited set of constructions, such
as <code>COPY... REPLACING</code>, <code>REPLACE</code>, but also some
other ones like compiler directives.</p>
<p>The lexer/parser for preprocessing works directly on the input file,
and, before version <code>3.2</code>, performed all these operations in a single
pass.</p>
<p>The output can be seen using the <code>-E</code> argument:</p>
<pre><code class="language-shell">$ cobc -E --free foo.cob
#line 1 "foo.cob"
DATA DIVISION.
WORKING-STORAGE SECTION.
01 VAL1.
#line 1 "MY-RECORD.CPY"
05 VAL1-USERNAME PIC X(60).
05 VAL1-BIRTHDATE.
10 VAL1-BIRTHDATE-YEAR PIC 9999.
10 VAL1-BIRTHDATE-MONTH PIC 99.
10 VAL1-BIRTHDATE-MDAY PIC 99.
05 VAL1-ADDRESS PIC X(100).
#line 5 "foo.cob"
01 VAL2.
#line 1 "MY-RECORD.CPY"
05 VAL2-USERNAME PIC X(60).
05 VAL2-BIRTHDATE.
10 VAL2-BIRTHDATE-YEAR PIC 9999.
10 VAL2-BIRTHDATE-MONTH PIC 99.
10 VAL2-BIRTHDATE-MDAY PIC 99.
05 VAL2-ADDRESS PIC X(100).
#line 7 "foo.cob"
01 COUNTERS.
05 COUNTER-NAMES PIC 999 VALUE 0.
05 COUNTER-VALUES PIC 999 VALUE 0.
</code></pre>
<p>The <code>-E</code> option is particularly useful if you want to understand the
final code that GnuCOBOL will compile. You can also get access to this
information using the option <code>--save-temps</code> (save intermediate files),
in which case <code>cobc</code> will generate a file with extension <code>.i</code> (<code>foo.i</code>
in our case) containing the preprocessed COBOL code.</p>
<p>You can see that <code>cobc</code> successfully performed both the <code>REPLACE</code> and
<code>COPY REPLACING</code> instructions.</p>
<p>The <a href="https://github.com/OCamlPro/gnucobol/blob/5ab722e656a25dc95ab99705ee1063562f2e5be5/cobc/pplex.l#L2049">corresponding code in version
3.1.2</a>
is in file <code>cobc/pplex.l</code>, function <code>ppecho</code>. Fully understanding it
is left as an exercise for the motivated reader.</p>
<p>The general idea is that replacements defined by <code>COPY REPLACING</code> and <code>REPLACE</code>
are added to the same list of active replacements.</p>
<p>We show in the next section that such an implementation does not
conform to the ISO standard.</p>
<h2>
<a id="standard" class="anchor"></a><a class="anchor-link" href="#standard">Conformance to the ISO Standard</a>
</h2>
<p>You may wonder if it is possible for <code>REPLACE</code> statements to perform
replacements that would change a <code>COPY</code> statement, such as:</p>
<pre><code class="language-COBOL">REPLACE ==COPY MY-RECORD== BY == COPY OTHER-RECORD==.
COPY MY-RECORD.
</code></pre>
<p>You may also wonder what happens if we try to combine replacements by
<code>COPY</code> and <code>REPLACE</code> on the same tokens, for example:</p>
<pre><code class="language-COBOL">REPLACE ==VAL1-USERNAME PIC X(30)== BY ==VAL1-USERNAME PIC X(60)==
</code></pre>
<p>Such a statement only makes sense if we assume the <code>COPY</code> replacements
have been performed before the <code>REPLACE</code> replacements are performed.</p>
<p>Such ambiguities have been resolved in the ISO Standard for COBOL: in
section <code>7.2.1. Text Manipulation >> General</code>, it is specified that
preprocessing is executed in 4 phases on the streams of tokens:</p>
<pre><code class="language-shell-session">1. `COPY` statements are performed, and the corresponding `REPLACING`
replacements too;
2. Conditional compiler directives are then performed;
3. `REPLACE` statements are performed;
4. `COBOL-WORDS` statements are performed (allowing to enable or disable
some keywords)
</code></pre>
<p>So, a <code>REPLACE</code> cannot modify a <code>COPY</code> statement (and the opposite is
also impossible, as <code>REPLACE</code> are not allowed in copybooks), but it
can modify the same set of tokens that are being modified by the
<code>REPLACING</code> part of a <code>COPY</code>.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/standard-iso-cobol.jpg">
<img alt="The ISO standard
specifies the different steps to preprocess COBOL files and perform
replacements in a specific order." src="/blog/assets/img/standard-iso-cobol.jpg"/>
</a>
<div class="caption">
The ISO standard
specifies the different steps to preprocess COBOL files and perform
replacements in a specific order.
</div>
</p>
</div>
</p>
<p>As described in the previous section, GnuCOBOL implements all phases
1, 2 and 3 in a single one, even mixing replacements defined by
<code>COPY</code> and by <code>REPLACE</code> statements. Fortunately, this behavior is good
enough for most programs. Unfortunately, there are still programs
that combine <code>COPY</code> and <code>REPLACE</code> on the same tokens, leading to
hard-to-debug errors, as the compiler does not conform to the
specification.</p>
<p>This is exactly the difficult situation that happened to one of our customers, and that we
promptly addressed by patching that part of the compiler.</p>
<h2>
<a id="automata" class="anchor"></a><a class="anchor-link" href="#automata">Preprocessing with Automata on Streams</a>
</h2>
<p>Correctly implementing the specification written in the standard would
make the preprocessing phase quite complicated. Indeed, we would have
to implement a small parser for every one of the four steps of
preprocessing. That's actually what we did for our <a href="https://github.com/OCamlPro/superbol-studio-oss/tree/master/src/lsp/cobol_preproc">COBOL parser in
OCaml</a>
used by the LSP (<a href="https://microsoft.github.io/language-server-protocol/">Language Server
Protocol</a>) of
our <a href="https://marketplace.visualstudio.com/items?itemName=OCamlPro.SuperBOL">SuperBOL
Studio</a>
COBOL plugin for VSCode.</p>
<p>However, doing the same in GnuCOBOL is much harder: GnuCOBOL is
written in C, and such a change would require a complete rewriting of
the preprocessor, something that would take more time than we
had on our hands. Instead, we opted for rewriting the replacement function, to
split <code>COPY REPLACING</code> and <code>REPLACE</code> into two different replacement phases.</p>
<p>The <a href="https://github.com/OCamlPro/gnucobol/blob/gnucobol-3.2/cobc/replace.c">corresponding C
code</a>
has been moved into a file <code>cobc/replace.c</code>. It implements an
automaton that applies a list of replacements on a stream of tokens,
returning another stream of tokens. The preprocessor is thus composed
of two instances of this automaton, one for <code>COPY REPLACING</code>
statements and another one for <code>REPLACE</code> statements.</p>
<p>The second instance takes the stream of tokens produced by the first one as
input. The automaton is implemented using recursive functions, which is particularly
well suited to reasoning about its correctness. Actually, several bugs were
found in the former C implementation while designing this
automaton. Each automaton has an internal state, composed of a queue of
tokens (waiting for a potential match) and a list
of possible replacements for these tokens.</p>
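<p>To make the idea more concrete, here is a minimal OCaml sketch of a single step
of such an automaton. The actual implementation is in C (in <code>cobc/replace.c</code>);
the names and the simplifications below, such as flushing the whole queue on a
failed match, are ours.</p>
<pre><code class="language-ocaml">type token = string
type replacement = { source : token list; target : token list }

(* Is the list of pending tokens a prefix of a replacement source? *)
let prefix_of pending source =
  let rec go p s =
    match p, s with
    | [], _ -> true
    | x :: p', y :: s' -> String.equal x y && go p' s'
    | _ :: _, [] -> false
  in
  go pending source

(* One step: [pending] holds the queued tokens awaiting a potential match.
   Returns the tokens to emit downstream and the new pending queue. *)
let step replacements pending tok =
  let pending = pending @ [tok] in
  match List.find_opt (fun r -> r.source = pending) replacements with
  | Some r -> (r.target, [])          (* full match: emit the target tokens *)
  | None ->
    if List.exists (fun r -> prefix_of pending r.source) replacements
    then ([], pending)                (* partial match: keep waiting *)
    else (pending, [])                (* no match: flush the queued tokens *)
</code></pre>
<p>The preprocessor then simply chains two such automata: the one holding the
<code>COPY REPLACING</code> rules feeds its output tokens to the one holding the
<code>REPLACE</code> rules.</p>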
<p>Thanks to this design, it was possible to provide a working implementation within a
very short time, considering the complexity of that part of the compiler.</p>
<p>We added several tests to the testsuite of the compiler for all the bugs that
had been detected in the process to prevent regressions in the future, and the
<a href="https://github.com/OCamlPro/gnucobol/pull/75">corresponding pull request</a> was
reviewed by Simon Sobisch, the GnuCOBOL project leader, and later upstreamed.</p>
<h3>
<a id="issues" class="anchor"></a><a class="anchor-link" href="#issues">Some Performance Issues</a>
</h3>
<p>Unfortunately, it was not the end of the work: Simon performed some
performance evaluations on this new implementation, and although it
had improved the conformance of GnuCOBOL to the standard, it did affect the performance negatively.</p>
<p>Compiler performance is not always critical for most applications, as
long as you compile only individual COBOL source files. However, some
source files can become very big, especially when part of the code is
auto-generated. In COBOL, a typical case of that is the use of a
pre-compiler, typically for SQL. Such programs contain <code>EXEC SQL</code>
statements, that are translated by the SQL pre-compiler into much
longer COBOL code, consisting mostly of <code>CALL</code> statements calling C
functions into the SQL library to build and execute SQL requests.</p>
<p>For such a generated program, of a whopping 700 kLines, Simon noticed a significant
degradation in compilation time, and profiling tools concluded that
the new preprocessor implementation was responsible for it, as shown
in the flamegraph below:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/cobc-callgraph-pplex1.png">
<img alt="A flamegraph
generated by <code>perf</code> stats visualised on <code>hotspot</code>: the horizontal axis
is the total duration. We can see that <code>ppecho</code>, the function for
replacements, takes most of the preprocessing time, with the
two-automata replacement phases. Credit: Simon Sobisch" src="/blog/assets/img/cobc-callgraph-pplex1.png"/>
</a>
<div class="caption">
A flamegraph
generated by <code>perf</code> stats visualised on <code>hotspot</code>: the horizontal axis
is the total duration. We can see that <code>ppecho</code>, the function for
replacements, takes most of the preprocessing time, with the
two-automata replacement phases. Credit: Simon Sobisch
</div>
</p>
</div>
</p>
<p>So we started investigating to fix the problem in <a href="https://github.com/OCamlPro/gnucobol/pull/142">a new
pull-request</a>.</p>
<h3>
<a id="allocations" class="anchor"></a><a class="anchor-link" href="#allocations">Optimizing Allocations</a>
</h3>
<p>Our first intuition was that the main difference with the previous
implementation came from allocating too many lists in the temporary
state of the two automata. This intuition was only partially right,
as we will see.</p>
<p>Mutable lists were used in the automaton (and also in the former
implementation) to store a small part of the stream of tokens, while
they were being matched with a replacement source. On a partial match,
the list had to wait for additional tokens to check for a full
match. Actually, these <strong>lists</strong> were used as <strong>queues</strong>, as tokens
were always added at the end, while matched or unmatched tokens were
removed from the front. Also, the size of these lists was bounded by the
length of the longest replacement defined in the code, which would rarely
be more than a few dozen tokens.</p>
<p>Our first idea was to replace these lists by real queues, that can be
efficiently implemented using <a href="https://github.com/OCamlPro/gnucobol/blob/82100d64de35c89ad5980d1b2c8d1ffdd3563570/cobc/replace.c#L89">circular buffers and
arrays</a>.
Each and every allocation of a new list element would then be replaced by the
single allocation of a circular buffer, with a few possible
reallocations further down the road if the queue were to grow
bigger.</p>
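<p>For the record, here is what such a queue looks like, transcribed in OCaml (the
real code is in C, linked above; this sketch is ours): a single array allocated up
front, reallocated only when the queue outgrows it.</p>
<pre><code class="language-ocaml">type 'a queue = {
  mutable buf : 'a array;
  mutable head : int;  (* index of the first element *)
  mutable len : int;   (* number of stored elements *)
}

let create dummy n = { buf = Array.make (max n 1) dummy; head = 0; len = 0 }

let push q x =
  if q.len = Array.length q.buf then begin
    (* The buffer is full: grow it once, instead of allocating a new
       cell at every push as a linked list would. *)
    let buf = Array.make (2 * Array.length q.buf) q.buf.(0) in
    for i = 0 to q.len - 1 do
      buf.(i) <- q.buf.((q.head + i) mod Array.length q.buf)
    done;
    q.buf <- buf;
    q.head <- 0
  end;
  q.buf.((q.head + q.len) mod Array.length q.buf) <- x;
  q.len <- q.len + 1

let pop q =
  assert (q.len > 0);
  let x = q.buf.(q.head) in
  q.head <- (q.head + 1) mod Array.length q.buf;
  q.len <- q.len - 1;
  x
</code></pre>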
<p>The results were a bit disappointing: on the flamegraph, there was
some improvement, but the replacement phase still took a lot of time:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/cobc-callgraph-pplex2.png">
<img alt="The flamegraph is
better, as shown by the disappearance of calls to <code>token_list_add</code>. But our work is not yet finished! Credit: Simon Sobisch" src="/blog/assets/img/cobc-callgraph-pplex2.png"/>
</a>
<div class="caption">
The flamegraph is
better, as shown by the disappearance of calls to <code>token_list_add</code>. But our work is not yet finished! Credit: Simon Sobisch
</div>
</p>
</div>
</p>
<p>Another intuition we had was that we had been a bit naive
about allocating tokens: in the initial implementation of version
<code>3.1.2</code>, tokens were allocated when copied from the lexer into the
single queue for replacement; in our implementation, that job was also
done, but twice, as they were allocated in both automata. So, we
modified our implementation to only allocate tokens when they first
enter the <code>COPY REPLACING</code> stream, and no longer when they
enter the <code>REPLACE</code> stream. A simple idea that again reduced the
remaining allocations by a factor of 2.</p>
<p>Yet, the new optimised implementation still didn't match the
performance of the former <code>3.1.2</code> version, and we were running out of
ideas on how the allocations performed by the automata could again be
improved:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/cobc-valgrind-pplex1.png">
<img alt="Using circular
buffers instead of mutable lists for queues decreased allocations by a
factor of 3. Removing the re-allocations between the two streams would
also improve it by a factor of 2. A nice improvement, but not yet the
performances of version 3.1.2" src="/blog/assets/img/cobc-valgrind-pplex1.png"/>
</a>
<div class="caption">
Using circular
buffers instead of mutable lists for queues decreased allocations by a
factor of 3. Removing the re-allocations between the two streams would
also improve it by a factor of 2. A nice improvement, but not yet the
performances of version 3.1.2
</div>
</p>
</div>
</p>
<h3>
<a id="fastpaths" class="anchor"></a><a class="anchor-link" href="#fastpaths">What about Fast Paths ?</a>
</h3>
<p>So we decided to study some of the code from <code>3.1.2</code> to understand what
could cause such a difference, and it became immediately obvious: the
former version had two fast paths, that we had left out of our own
implementation!</p>
<p>The two fast paths that completely shortcut the replacement mechanisms are the
following:</p>
<p>The first one is when there are no replacements defined in the source. In
COBOL, most replacements are only performed in the <code>DATA DIVISION</code>, and
moreover, <code>COPY REPLACING</code> ones are only performed during copies. This means
that a large part of the code that did not need to go through our two automata
still did!</p>
<p>The second fast path is for spaces: replacements always start and finish with a
non-space token in COBOL, so, if we check that we are not in the middle of a
partial match (i.e. both internal token queues are empty), we can safely make
the space token skip the automata. Again, given the frequency of space tokens
(about half, as there are very few other separators), this fast path is
likely to be used very, very frequently.</p>
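<p>In simplified, self-contained OCaml pseudo-code, the dispatch looks like the
sketch below; the names are ours, and the real function is <code>ppecho</code>, in C.</p>
<pre><code class="language-ocaml">type ('rule, 'tok) state = {
  copy_rules : 'rule list;        (* COPY ... REPLACING replacements *)
  replace_rules : 'rule list;     (* REPLACE replacements *)
  copy_pending : 'tok Queue.t;    (* pending tokens of the first automaton *)
  replace_pending : 'tok Queue.t; (* pending tokens of the second automaton *)
}

(* [emit] bypasses both automata; [slow] runs them. *)
let dispatch ~emit ~slow ~is_space st tok =
  if st.copy_rules = [] && st.replace_rules = [] then
    emit tok  (* fast path 1: no replacement defined at all *)
  else if is_space tok
          && Queue.is_empty st.copy_pending
          && Queue.is_empty st.replace_pending then
    emit tok  (* fast path 2: not mid-match, and a space cannot start one *)
  else
    slow st tok
</code></pre>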
<p>Implementing them was straightforward, and the results were the ones
expected:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/cobc-callgraph-pplex3.png">
<img alt="After implementing
the same fast paths as in 3.1.2, the flamegraph is back to normal,
with the time spent in the replacement function being almost not
noticeable. Credit: Simon Sobisch" src="/blog/assets/img/cobc-callgraph-pplex3.png"/>
</a>
<div class="caption">
After implementing
the same fast paths as in 3.1.2, the flamegraph is back to normal,
with the time spent in the replacement function being almost not
noticeable. Credit: Simon Sobisch
</div>
</p>
</div>
</p>
<h3>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h3>
<p>As often with optimisations, intuitions do not always lead to the
expected improvements: in our case, the real improvement came not from
improving the algorithm, but from shortcutting it!</p>
<p>Yet, we are still very pleased by the results: the new optimised
implementation of replacements in GnuCOBOL makes it more conformant to the
standard, and also more efficient than the former <code>3.1.2</code> version, as shown by
the final results sent to us by Simon:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/cobc-valgrind-pplex2.png">
<img alt="These results show that the new implementation is now a little better than 3.1.2. It comes from using the circular buffers instead of the mutable lists for queues, but the optimisation only happens when replacements are defined, which is a very small part of the code source." src="/blog/assets/img/cobc-valgrind-pplex2.png"/>
</a>
<div class="caption">
These results show that the new implementation is now a little better than 3.1.2. It comes from using the circular buffers instead of the mutable lists for queues, but the optimisation only happens when replacements are defined, which is a very small part of the code source.
</div>
</p>
</div>
</p>
OCaml Backtraces on Uncaught Exceptionshttps://ocamlpro.com/blog/2024_04_25_ocaml_backtraces_on_uncaught_exceptions2024-04-25T08:12:13Z2024-04-25T08:12:13Z
Louis Gesbert
Uncaught exception: Not_found This blog post probably won't teach anything new to OCaml veterans; but for the others, you might be glad to learn that this very basic, yet surprisingly little-known feature of OCaml will give you backtraces with source file positions on any uncaught exception. Since i...<p></p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/AIGEN_camel_catching_butterflies.jpeg">
<img alt="A mystical Camel using its net to catch all uncaught... Butterflies." src="/blog/assets/img/AIGEN_camel_catching_butterflies.jpeg"/>
</a>
<div class="caption">
A mystical Camel using its net to catch all uncaught... Butterflies.
</div>
</p>
</div>
</p>
<h2>
<a id="notfound" class="anchor"></a><a class="anchor-link" href="#notfound">Uncaught exception: Not_found</a>
</h2>
<p>This blog post probably won't teach anything new to OCaml veterans; but for
the others, you might be glad to learn that this very basic, yet surprisingly
little-known feature of OCaml will give you backtraces with source file
positions on any uncaught exception.</p>
<p>Since it can save hours of frustrating debugging, my intent is to give some
publicity to this accidentally hidden feature.</p>
<blockquote>
<p>PSA: define <code>OCAMLRUNPARAM=b</code> in your environment.</p>
</blockquote>
<p>For those wanting to go further, I'll then go on with hints and guidelines for
good exception management in OCaml.</p>
<blockquote>
<p>For the details, everything here is documented in <a href="https://caml.inria.fr/pub/docs/manual-ocaml/libref/Printexc.html">the Printexc
module</a>.</p>
</blockquote>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#notfound">Uncaught exception: Not_found</a>
</li>
<li><a href="#getyourstacktraces">Get your stacktraces!</a>
</li>
<li><a href="#improve">Improve your traces</a>
<ul>
<li><a href="#reraising">Properly Re-raising exceptions, and finalisers</a>
</li>
<li><a href="#holes">There are holes in my backtrace!</a>
</li>
</ul>
</li>
<li><a href="#guidelines">Guidelines for exception handling, and Control-C</a>
<ul>
<li><a href="#backtracesinocaml">Controlling the backtraces from OCaml</a>
</div>
</li>
</ul>
</li>
</ul>
<h2>
<a id="getyourstacktraces" class="anchor"></a><a class="anchor-link" href="#getyourstacktraces">Get your stacktraces!</a>
</h2>
<p>Compile-time errors are good, but sometimes you just have to cope with run-time
failures.</p>
<p>Here is a simple (and buggy) program:</p>
<pre><code class="language-ocaml">let dict = [
"foo", "bar";
"foo2", "bar2";
]
let rec replace = function
| [] -> []
| w :: words -> List.assoc w dict :: words
let () =
let words = Array.to_list Sys.argv in
List.iter print_endline (replace words)
</code></pre>
<blockquote>
<p><strong>Side note</strong></p>
<p>For purposes of the example, we use <code>List.assoc</code> here; this relies
on OCaml's structural equality, which is often a bad idea in projects, as it
can break in surprising ways when the matched type gets more complex. A more
serious implementation would use <em>e.g.</em> <code>Map.Make</code> with an explicit comparison
function.</p>
</blockquote>
<p>Here is the result of executing this program with no options:</p>
<pre><code class="language-shell-session">$ ./foo
Fatal error: exception Not_found
</code></pre>
<p>This isn't very helpful. But there is no need for a debugger, lots of <code>printf</code> calls, or tedious
debugging; just do the following:</p>
<pre><code class="language-shell-session">$ export OCAMLRUNPARAM=b
$ ./foo
Fatal error: exception Not_found
Raised at Stdlib__List.assoc in file "list.ml", line 191, characters 10-25
Called from Foo.replace in file "foo.ml", line 8, characters 18-35
Called from Foo in file "foo.ml", line 12, characters 26-41
</code></pre>
<p>Much more helpful! In most cases, this will be enough to find and fix the bug.</p>
<p>If you still don't get the backtrace, you may need to recompile with <code>-g</code> (with
dune, ensure your default profile is <code>dev</code> or specify <code>--profile=dev</code>).</p>
<p>So, now we know where the failure occurred... but not on what input. This is
not a matter of backtraces: if that's an issue, define your own exceptions,
with arguments, and raise those rather than the basic <code>Not_found</code>.</p>
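<p>For example (our own sketch, not code from the original program), an exception
carrying the offending word makes both the error message and the backtrace far
more useful:</p>
<pre><code class="language-ocaml">exception Word_not_found of string

let assoc_word w dict =
  try List.assoc w dict
  with Not_found -> raise (Word_not_found w)
</code></pre>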
<blockquote>
<p><strong>Hint</strong></p>
<p>If you run the program directly from your editor, with a properly
configured OCaml mode, the file positions in the backtrace should be parsed
and become clickable, making navigation very quick and easy.</p>
</blockquote>
<h2>
<a id="improve" class="anchor"></a><a class="anchor-link" href="#improve">Improve your traces</a>
</h2>
<p>The above works well in general, but depending on the complexity of the
programs, there are some more advanced tricks that may be helpful, to preserve
or improve the backtraces.</p>
<h3>
<a id="reraising" class="anchor"></a><a class="anchor-link" href="#reraising">Properly Re-raising exceptions, and finalisers</a>
</h3>
<p>It's pretty common to want a finaliser after some processing, here to remove a
temporary file:</p>
<pre><code class="language-ocaml">let with_temp_file basename (f: unit -> 'a) : 'a =
let filename = Filename.temp_file basename in
match f filename with
| result ->
Sys.remove filename;
result
| exception e ->
Sys.remove filename;
raise e
</code></pre>
<p>In simple cases this will work, but if <em>e.g.</em> you are using the <code>Printf</code> module
before re-raising, it will break the printed backtrace.</p>
<ul>
<li>
<p><strong>Solution 1</strong>: use <code>Fun.protect ~finally f</code>, which handles the backtrace
properly (see the sketch at the end of this section).</p>
</li>
<li>
<p><strong>Solution 2</strong>: manually, use raw backtrace access from the <code>Printexc</code> module:</p>
<pre><code class="language-ocaml">| exception e ->
let bt = Printexc.get_raw_backtrace () in
Sys.remove filename;
Printexc.raise_with_backtrace e bt
</code></pre>
</li>
</ul>
<p>Re-raising exceptions after catching them should always be done in this way.</p>
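<p>Coming back to <strong>Solution 1</strong>, the temporary-file helper above could be
written as follows (a sketch; <code>".tmp"</code> is an arbitrary suffix):</p>
<pre><code class="language-ocaml">let with_temp_file basename (f : string -> 'a) : 'a =
  let filename = Filename.temp_file basename ".tmp" in
  Fun.protect
    ~finally:(fun () -> Sys.remove filename)
    (fun () -> f filename)
</code></pre>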
<h3>
<a id="holes" class="anchor"></a><a class="anchor-link" href="#holes">There are holes in my backtrace!</a>
</h3>
<p>Indeed, it may appear that not all function calls show up in the backtrace.</p>
<p>There are two main reasons for that:</p>
<ul>
<li>functions can get inlined by the compiler, so they don't actually appear in
the concrete backtrace at runtime;
</li>
<li>tail-call optimisation also affects the stack, which can be visible here;
</li>
</ul>
<p>Don't rush to disable all optimisations though! Some effort has been put into
recording useful debugging information even in these cases. The <a href="https://ocamlpro.com/blog/2024_03_18_the_flambda2_snippets_0/">Flambda pass
of the compiler</a>, which
does <strong>more</strong> inlining, also actually makes it <strong>more</strong> traceable.</p>
<p>As a consequence, switching to Flambda will often give you more helpful
backtraces with recursive functions and tail-calls. It can be done with <code>opam install ocaml-option-flambda</code> (this will recompile the whole opam switch).</p>
<blockquote>
<p><strong>Well, what if my program uses <code>lwt</code>?</strong></p>
<p>Backtraces in this context are a complex matter -- but they can be simulated:
a good practice is to use <code>ppx_lwt</code> and the <code>let%lwt</code> syntax rather than
<code>let*</code> or <code>Lwt.bind</code> directly, because the ppx will insert calls that
reconstruct "fake" backtrace information.</p>
</blockquote>
<h2>
<a id="guidelines" class="anchor"></a><a class="anchor-link" href="#guidelines">Guidelines for exception handling, and Control-C</a>
</h2>
<p>Exceptions in OCaml can happen anywhere in the program: besides uses of <code>raise</code>,
system errors can trigger them. In particular, if you want to implement clean
termination on the user pressing <code>Control-C</code> without manually handling signals,
you should call <code>Sys.catch_break true</code>; you will then get a <code>Sys.Break</code>
exception raised when the user interrupts the program.</p>
<p>Anyway, this is one reason why you must never use <code>try .. with _ -></code>:</p>
<pre><code class="language-ocaml">let find_opt x m =
try Some (Map.find x m)
with _ -> None
</code></pre>
<p>The programmer was too lazy to write <code>with Not_found</code>. They may think this is OK
since <code>Map.find</code> won't raise anything else. But if <code>Control-C</code> is pressed at the
wrong time, this will catch it, and return <code>None</code> instead of stopping the
program!</p>
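<p>The fix is simply to catch only the exception you expect (keeping the same
hypothetical <code>Map</code> module as in the snippet above):</p>
<pre><code class="language-ocaml">let find_opt x m =
  try Some (Map.find x m)
  with Not_found -> None
</code></pre>
<p>If instead you want to observe or log the failure before letting it propagate:</p>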
<pre><code class="language-ocaml">let find_debug x m =
try Map.find x m
with e ->
let bt = Printexc.get_raw_backtrace () in
Printf.eprintf "Error on %s!" (to_string x);
Printexc.raise_with_backtrace e bt
</code></pre>
<p>This version is OK since it re-raises the exception. If you absolutely need to
catch all exceptions, a last resort is to explicitly re-raise "uncatchable"
exceptions:</p>
<pre><code class="language-ocaml">let this_is_a_last_resort =
try .. with
| (Sys.Break | Assert_failure _ | Match_failure _) as e -> raise e
| _ -> ..
</code></pre>
<p>In practice, you'll eventually want to catch exceptions from your main function
(<code>cmdliner</code> already offers to do this, for example); catching <code>Sys.Break</code> at
that point will offer a better message than <code>Uncaught exception</code>, give you
control over finalisation and the final exit code (the convention is to use
<code>130</code> for <code>Sys.Break</code>).</p>
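<p>A minimal sketch of such a top-level handler (assuming a <code>main : unit -> unit</code>
entry point) could look like this:</p>
<pre><code class="language-ocaml">let main () = print_endline "working..."  (* placeholder *)

let () =
  Sys.catch_break true;
  try main ()
  with Sys.Break ->
    prerr_endline "Interrupted.";
    exit 130
</code></pre>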
<h3>
<a id="backtracesinocaml" class="anchor"></a><a class="anchor-link" href="#backtracesinocaml">Controlling the backtraces from OCaml</a>
</h3>
<p>Setting <code>OCAMLRUNPARAM=b</code> in the environment works from the outside, but the
module <a href="https://caml.inria.fr/pub/docs/manual-ocaml/libref/Printexc.html">Printexc</a>
can also be used to enable or disable them from the OCaml program itself.</p>
<ul>
<li><code>Printexc.record_backtrace: bool -> unit</code> toggles the recording of
backtraces. Forcing it <code>off</code> when running tests, or <code>on</code> when a debug flag is
specified, can be good ideas;
</li>
<li><code>Printexc.backtrace_status: unit -> bool</code> checks if recording is enabled.
This can be used when finalising the program to print the backtraces when
enabled;
</li>
</ul>
<blockquote>
<p><strong>Nota Bene</strong></p>
<p>The <code>base</code> library turns <code>on</code> backtraces recording by default. While I
salute an attempt to remedy the issue that this post aims to address, this can
lead to surprises when just linking the library can change the output of a
program (<em>e.g.</em> this might require specific code for cram tests not to display
backtraces)</p>
</blockquote>
<p>The <code>Printexc</code> module also allows to register custom exception printers: if,
following the advice above, you defined your own exceptions with parameters, use
<code>Printexc.register_printer</code> to have that information available when they are
uncaught.</p>
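<p>For instance (our own sketch, reusing the hypothetical exception from earlier in
this post):</p>
<pre><code class="language-ocaml">exception Word_not_found of string

let () =
  Printexc.register_printer (function
    | Word_not_found w -> Some (Printf.sprintf "Word_not_found %S" w)
    | _ -> None)
</code></pre>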
Opam 102: Pinning Packageshttps://ocamlpro.com/blog/2024_03_25_opam_102_pinning_packages2024-03-25T08:12:13Z2024-03-25T08:12:13Z
Dario Pinto
Raja Boujbel
Welcome, dear reader, to a new opam blog post! Today we take an additional step down the metaphorical rabbit hole with opam pin, the easiest way to catch a ride on the development version of a package in opam. We are aware that our readers are eager to see these blog posts venture on the developer s...<p></p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/opam102_pins.svg">
<img alt="Pins standout. They help us anchor interest points, thus helping us focus on what's important. They become the catalyst for experimentation and help us navigating the strong safety features that opam provides users with." src="/blog/assets/img/opam102_pins.svg"/>
</a>
<div class="caption">
Pins standout. They help us anchor interest points, thus helping us focus on what's important. They become the catalyst for experimentation and help us navigating the strong safety features that opam provides users with.
</div>
</p>
</div>
</p>
<p>Welcome, dear reader, to a new opam blog post!</p>
<p>Today we take an additional step down the metaphorical rabbit hole with <code>opam pin</code>, the easiest way to catch a ride on the development version of a package
in <code>opam</code>.</p>
<p>We are aware that our readers are eager to see these blog posts venture onto the
developer side of the <code>opam</code> experience, and so are we, but we need to spend
just a little more time on the beginner and user side of it for now, so
please bear with us! 🐻</p>
<blockquote>
<p>This tutorial is the second one in this on-going series about the OCaml
package manager <code>opam</code>.
Be sure to read <a href="https://ocamlpro.com/blog/2024_01_23_opam_101_the_first_steps/">the first one</a> to get
up to speed.
Also, check out each article's <code>tags</code> to get an idea of the entry level
required for the smoothest read possible!</p>
</blockquote>
<blockquote>
<p><strong>New to the expansive OCaml sphere?</strong>
As said on the official opam website,
<a href="https://opam.ocaml.org/about.html#A-little-bit-of-History"><code>opam</code></a> has been a
game changer for the OCaml distribution since it first saw the light of day,
almost a decade ago.</p>
</blockquote>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#opampincontext">Tutorial context</a>
</li>
<li><a href="#opampinusecase">Use-case for <code>opam pin</code></a>
<ul>
<li><a href="#opampindev">Pinning a released package development version: <code>opam pin add --dev-repo</code></a>
</li>
<li><a href="#opampinurl">Pinning an unreleased package development version: <code>opam pin add <url></code></a>
</li>
</ul>
</li>
<li><a href="#opampinoptions">Dig into opam pin, find spicy features</a>
<ul>
<li><a href="#noaction">Add a pin without installing with <code>--no-action</code></a>
</li>
<li><a href="#updatepins">Update your pinned packages</a>
</li>
<li><a href="#unpin">Unpin packages</a>
<ul>
<li><a href="#releasedpins">Released packages</a>
</li>
<li><a href="#unreleasedpins">Unreleased packages</a>
</li>
<li><a href="#unpinnoaction">Unpin but do no action</a>
</li>
</ul>
</li>
<li><a href="#multiple">One URL to pin them all: handling a multi-package repository</a>
</li>
<li><a href="#version">Setting arbitrary version numbers, toying with fire</a>
</li>
<li><a href="#morefire">Setting multiple arbitrary version numbers</a>
</li>
</ul>
</li>
<li><a href="#conclusion">Conclusion</a>
</div>
</li>
</ul>
<h2>
<a id="opampincontext" class="anchor"></a><a class="anchor-link" href="#opampincontext">Tutorial context and basis</a>
</h2>
<p>As far as context goes for this article, we will consider that you already are
familiar with the concepts introduced in our tutorial <a href="https://ocamlpro.com/blog/2024_01_23_opam_101_the_first_steps/">opam
101</a>.</p>
<p>Your current environment should thus be somewhat similar to the one we had by
the end of that tutorial. Meaning: your version of <code>opam</code> is at least <code>2.1.5</code>
(all outputs were generated with this version), you have already launched <code>opam init</code>, created a global switch <code>my-switch</code> and, possibly, you have even
populated it with a few packages through a few calls to the <code>opam install</code>
command.</p>
<p>Furthermore, keep in mind that, in this blog post, we are approaching this
subject from the perspective of a developer who is looking into integrating new
packages into their current workload, not from the perspective of someone who is
looking into sharing a project or publishing new software.</p>
<p><code>opam pin</code> is a feature that will quickly become necessary for you to use as
you continue your exploration of <code>opam</code>. It allows the user to <strong>pin</strong> a
given package to a specific version, or even to change the source from which said
package is pulled, installed, and kept in sync, all within your currently
active <code>switch</code>.</p>
<p>This feature shines the most in contexts such as:</p>
<ul>
<li>when doing ordinary <code>switch</code> management;
</li>
<li>for incorporating external, <em>still under-construction</em>, libraries to your own
current workload;
</li>
<li>when designing a specific <code>switch</code>: pinning a specific package version
will make it the main compatibility constraint for that switch, thus
tailoring the environment around it in the process.
</li>
</ul>
<blockquote>
<p><strong>Reminder</strong></p>
<p>Remember that <code>opam</code>'s command-line interface is beginner friendly. You can,
at any point of your exploration, use the <code>--help</code> option to have every
command and subcommand explained. You may also check out the <a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-opam.pdf">opam
cheat-sheet</a>
that was released a while ago and still holds some precious insights on
opam's CLI.</p>
</blockquote>
<h2>
<a id="opampinusecase" class="anchor"></a><a class="anchor-link" href="#opampinusecase">Use-case for <code>opam pin</code></a>
</h2>
<p>Now onto today's use-cases for <code>opam pin</code>, the premise is as follows:</p>
<p>The package on which your current development depends has just had a major
update on its <em>development</em> branch. This package is available on the opam <code>repository</code>
and its name is <a href="https://ocaml.org/p/hc/latest"><code>hc</code></a>.</p>
<p>That update introduced a new feature that you would very much like to
experiment with for your own on-going project.</p>
<p>However, that feature is still very much a <em>work-in-progress</em> and the
maintainers of <code>hc</code> are <strong>not</strong> about to release their package anytime soon...</p>
<p>That's when <code>opam pin</code> comes in. In this article, we will cover two similar
use-cases for <code>opam pin</code>, namely the one dealing with pinning a version of a
package that is already available on the <a href="https://opam.ocaml.org/packages/">opam <code>repository</code></a>, and that of pinning
a version of an unreleased package, directly from its public URL.</p>
<p>After all the basics have been laid out, we will eventually cover some of the
more underground ⛏ and dangerous 🔥 features available when pinning packages.</p>
<blockquote>
<p><strong>Important Notice</strong></p>
<p>For the sake of convenience and brevity, we will break down the <code>opam pin</code>
command, and some of its options, by only dealing with addresses that obey
the classic definition of the word <strong>URL</strong>.</p>
<p>However do keep in mind that <code>opam</code> uses <a href="https://opam.ocaml.org/doc/Manual.html#URLs">a broader definition
</a> for that word, going as far as
to consider a filesystem path to be a valid string for a <strong>URL</strong> argument,
thus allowing all <code>opam pin</code> calls and options to be valid when manipulating
<code>opam</code> packages inside a local filesystem or local network instead of <strong>just</strong> on the web.</p>
</blockquote>
<h3>
<a id="opampindev" class="anchor"></a><a class="anchor-link" href="#opampindev">Pinning the dev version of a released package: <code>opam pin add --dev-repo</code></a>
</h3>
<p>Picking up from the base context: our project depends on <code>hc</code>, and <code>hc</code> has
just received an update. The first option available for us to access this fresh
update on the <code>hc</code> repository is to use <code>opam pin add --dev-repo <pkg></code>
command.</p>
<pre><code class="language-shell-session">$ opam pin add --dev-repo hc
[hc.0.3] synchronised (git+https://git.zapashcanon.fr/zapashcanon/hc.git)
hc is now pinned to git+https://git.zapashcanon.fr/zapashcanon/hc.git (version 0.3)
The following actions will be performed:
∗ install dune 3.14.0 [required by hc]
∗ install hc 0.3*
===== ∗ 2 =====
Do you want to continue? [Y/n] y
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
⬇ retrieved hc.0.3 (no changes)
⬇ retrieved dune.3.14.0 (https://opam.ocaml.org/cache)
∗ installed dune.3.14.0
∗ installed hc.0.3
Done.
</code></pre>
<hr />
<h2>So what exactly did <code>opam pin</code> do here?</h2>
<pre><code class="language-shell-session">$ opam pin add --dev-repo hc
[hc.0.3] synchronised (git+https://git.zapashcanon.fr/zapashcanon/hc.git)
</code></pre>
<p>When you feed a package name to the <code>opam pin add --dev-repo</code> command, it will
first retrieve the package definition found inside the <a href="https://github.com/ocaml/opam-repository/blob/master/packages/hc/hc.0.3/opam"><code>opam file</code></a>
in the directory of the corresponding package on the <a href="https://github.com/ocaml/opam-repository">Official OCaml opam
<code>repository</code></a> or any other opam
<code>repositories</code> that your local <code>opam</code> installation happens to be synchronised
with.</p>
<p>You can inspect said package definition directly yourself with the <code>opam show <pkg></code> command.</p>
<p>Let's take a look at the package definition for <code>hc</code>:</p>
<pre><code class="language-shell-session">$ opam show hc
<><> hc: information on all versions ><><><><><><><><><><><><><><><><>
name hc
all-versions 0.0.1 0.2 0.3
<><> Version-specific details <><><><><><><><><><><><><><><><><><><><>
version 0.3
repository default
url.src "https://git.zapashcanon.fr/zapashcanon/hc/archive/0.3.tar.gz"
url.checksum
"sha256=61b443056adec3f71904c5775b8521b3ac8487df618a8dcea3f4b2c91bedc314"
"sha512=a1d213971230e9c7362749d20d1bec6f5e23af191522a65577db7c0f9123ea4c0fc678e5f768418d6dd88c1f3689a49cf564b5c744995a9db9a304f4b6d2c68a"
homepage "https://git.zapashcanon.fr/zapashcanon/hc"
doc "https://doc.zapashcanon.fr/hc/"
bug-reports "https://git.zapashcanon.fr/zapashcanon/hc/issues"
dev-repo "git+https://git.zapashcanon.fr/zapashcanon/hc.git"
authors "Léo Andrès <contact@ndrs.fr>"
maintainer "Léo Andrès <contact@ndrs.fr>"
license "ISC"
depends "dune" {>= "3.0"} "ocaml" {>= "4.14"} "odoc" {with-doc}
synopsis Hashconsing library
description hc is an OCaml library for hashconsing. It provides
easy ways to use hashconsing, in a type-safe and
modular way and the ability to get forgetful
memoïzation.
</code></pre>
<p>Here, you can see the <code>dev-repo</code> field which contains the URL of the
development repository of that package. Opam will use that information to
retrieve package sources for you.</p>
<hr />
<pre><code class="language-shell-session">hc is now pinned to git+https://git.zapashcanon.fr/zapashcanon/hc.git (version 0.3)
</code></pre>
<p>Once it has retrieved the <code>hc</code> sources, opam will store the status of the pin
internally: <code>hc</code> is <em>git pinned</em> to the URL
<code>git.zapashcanon.fr/zapashcanon/hc</code> at version <code>0.3</code>.</p>
<pre><code class="language-shell-session">$ opam pin list
hc.0.3 git git+https://git.zapashcanon.fr/zapashcanon/hc.git
</code></pre>
<blockquote>
<p><strong>Did you know?</strong>
The default behaviour of <code>opam pin</code>, with no argument, is the <code>list</code> subcommand, which shows
all pinned packages in the currently active switch.</p>
<p>On the other hand, the default behaviour of the <code>opam pin <target></code> command is the
<code>add</code> subcommand. Keep that in mind if you happen to grow tired of typing <code>opam pin add <target></code> every time.</p>
</blockquote>
<hr />
<p>Opam will then analyse <code>hc</code>'s dependencies and compute a solution that respects
the dependency constraints and the state of your current switch (i.e. the
compatibility constraints between the packages currently installed in your
switch).</p>
<p>If it manages to do so, it will come forth with a prompt to install the pinned
package and its dependencies.</p>
<pre><code class="language-shell-session">The following actions will be performed:
∗ install dune 3.14.0 [required by hc]
∗ install hc 0.3*
===== ∗ 2 =====
Do you want to continue? [Y/n] y
</code></pre>
<p>Pressing <code>Enter</code> or <code>y + Enter</code> will perform the installation.</p>
<blockquote>
<p>Did you notice that a <code>*</code> character is sometimes found next to some package actions? It's
the shorthand signal that the package is pinned: you can get that information
at a glance, when <code>opam</code> outputs the actions it will perform, if you know
what to look for.</p>
</blockquote>
<pre><code class="language-shell-session"><><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
⬇ retrieved hc.0.3 (no changes)
⬇ retrieved dune.3.14.0 (https://opam.ocaml.org/cache)
∗ installed dune.3.14.0
∗ installed hc.0.3
Done.
</code></pre>
<p>Congratulations, you now have a pinned <em>development</em> version of the <code>hc</code> package. You
can now start exploring the neat feature you have been looking forward to!</p>
<h3>
<a id="opampinurl" class="anchor"></a><a class="anchor-link" href="#opampinurl">Pinning the dev version of an unreleased package: <code>opam pin add <url></code></a>
</h3>
<p>Every once in a while on your OCaml journey, you will come across unreleased
software.</p>
<p>These OCaml programs and libraries can still very much have active repositories
but their maintainers have not yet gone as far as to release them in order to
distribute their work through <code>opam</code> to the rest of the OCaml ecosystem.</p>
<p>Yet, you might still want to have seamless access to these software solutions
on your local <code>opam</code> installation for your own personal enjoyment and
developments. That's when <code>opam pin add <url></code> comes in handy.</p>
<p>Modern OCaml projects will most often have one or several <code>opam files</code> in their
tree which <code>opam</code> can operate with.</p>
<pre><code class="language-shell-session">$ opam pin git+https://github.com/rjbou/opam-otopop
Package opam-otopop does not exist, create as a NEW package? [Y/n] y
opam-otopop is now pinned to git+https://github.com/rjbou/opam-otopop (version 0.1)
The following actions will be performed:
∗ install opam-client 2.0.10 [required by opam-otopop]
∗ install opam-otopop 0.1*
===== ∗ 2 =====
Do you want to continue? [Y/n] y
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
⬇ retrieved opam-client.2.0.10 (https://opam.ocaml.org/cache)
∗ installed opam-client.2.0.10
∗ installed opam-otopop.0.1
Done.
</code></pre>
<p>As you can see, the course of an <code>opam pin add <url></code> call is very close to
that of an <code>opam pin add --dev-repo <pkg></code>, the only exception being the
following line:</p>
<pre><code class="language-shell-session">Package opam-otopop does not exist, create as a NEW package? [Y/n] y
</code></pre>
<p>Since the package is unavailable on the opam <code>repositories</code> that your <code>opam</code>
installation is synchronised with, <code>opam</code> doesn't know about it.</p>
<p>That's why it will ask you if you want to <code>create it as a NEW package</code>.</p>
<p>Once pinned, that package is available in your switch as any other ordinarily
available <code>repository</code> package.</p>
<hr />
<p>You can see here that <code>opam</code> has pinned the <code>opam-otopop</code> package to a specific
<code>0.1</code> version.</p>
<pre><code class="language-shell-session">opam-otopop is now pinned to git+https://github.com/rjbou/opam-otopop (version 0.1)
</code></pre>
<p>The reason for that is found inside the <a href="https://github.com/rjbou/opam-otopop/blob/master/opam-otopop.opam#L2"><code>opam file</code></a> at
the root of the source repository for that package:</p>
<pre><code class="language-shell-session">version: "0.1"
</code></pre>
<p>In any instance where this specific field is not found in the <code>opam file</code>, the
package would instead be pinned at the literal <code>~dev</code> version.</p>
<h2>
<a id="opampinoptions" class="anchor"></a><a class="anchor-link" href="#opampinoptions">Dig into opam pin, find spicy features</a>
</h2>
<h3>
<a id="noaction" class="anchor"></a><a class="anchor-link" href="#noaction">Add a pin without installing with <code>--no-action</code></a>
</h3>
<p>Here are the two main use-cases for a call to <code>opam pin</code> with the <code>--no-action</code>
option:</p>
<ul>
<li>You <strong>don't</strong> want to install a package immediately, but <strong>do</strong> want to
inform <code>opam</code> of its existence to allow <code>opam</code> to keep the compatibility
constraints of that specific package in the equation whenever you are
undertaking operations that would require such calculations;
</li>
<li>You just want to be assured that your package will be synchronised with the
right sources;
</li>
</ul>
<p><code>--no-action</code> will only perform the first actions of an <code>opam pin</code> call and
will quit <strong>before</strong> installing the package; it can be used with all pin
subcommands.</p>
<pre><code class="language-shell-session">$ opam pin add hc --dev-repo --no-action
[hc.0.3] synchronised (git+https://git.zapashcanon.fr/zapashcanon/hc.git)
hc is now pinned to git+https://git.zapashcanon.fr/zapashcanon/hc.git (version 0.3)
$
</code></pre>
<h3>
<a id="updatepins" class="anchor"></a><a class="anchor-link" href="#updatepins">Update your pinned packages</a>
</h3>
<p>There are two ways to go about updating and upgrading your pinned packages.
They are the same whether you used the <code>--dev-repo</code> option, the <code><url></code>
argument, or any other method for pinning them.</p>
<p>The first one you may consider is to simply install, or reinstall, the specific
package(s). The reason is that <code>opam</code> will always first synchronise with the
linked source, and then proceed to recompile.</p>
<pre><code class="language-shell-session">$ opam install opam-otopop
<><> Synchronising pinned packages ><><><><><><><><><><><><><><><><><><><><><><>
[opam-otopop.0.1] synchronised (git+https://github.com/rjbou/opam-otopop#master)
The following actions will be performed:
↻ recompile opam-otopop 0.1*
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><><><><><><>
⊘ removed opam-otopop.0.1
∗ installed opam-otopop.0.1
Done.
</code></pre>
<p>In the above code block, <code>opam-otopop</code> has been upgraded by that <code>opam install</code>
call.</p>
<p>The second method is to use the specific <code>opam update</code> and <code>opam upgrade</code>
mechanisms. These commands are very common in a typical <code>opam</code> workflow. Their
general usage was briefly mentioned in our article <a href="https://ocamlpro.com/blog/2024_01_23_opam_101_the_first_steps/#packages">opam
101</a>.</p>
<p>By default, <code>opam update</code> updates the state of your opam <code>repositories</code>, so
that you have access to the most recent versions of your packages. If you add the
<code>--development</code> flag to it, it will also update the source code of your pinned
packages internally.</p>
<pre><code class="language-shell-session">$ opam update --development
<><> Synchronising development packages <><><><><><><><><><><><><><><><><><><><>
[opam-otopop.0.1] synchronised (git+https://github.com/rjbou/opam-otopop#master)
Now run 'opam upgrade' to apply any package updates.
</code></pre>
<p>Then you run <code>upgrade</code> as you would in any other package upgrade scenario.</p>
<pre><code class="language-shell-session">$ opam upgrade
The following actions will be performed:
↻ recompile opam-otopop 0.1* [upstream or system changes]
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><><><><><><>
⊘ removed opam-otopop.0.1
∗ installed opam-otopop.0.1
Done.
</code></pre>
<h3>
<a id="unpin" class="anchor"></a><a class="anchor-link" href="#unpin">Unpin packages</a>
</h3>
<p>When you are done with your experimentation and wish to remove a pinned
package, you can simply call the <code>remove</code> subcommand.</p>
<blockquote>
<p>Keep in mind that <code>opam unpin</code> is an alias for <code>opam pin remove</code>.</p>
</blockquote>
<p>The behaviour of <code>opam unpin</code> is slightly different between released and
unreleased packages.</p>
<h4>
<a id="releasedpins" class="anchor"></a><a class="anchor-link" href="#releasedpins">Released packages</a>
</h4>
<p>If the pinned package is released, by default, <code>opam</code> will retrieve and install
the released version of the package instead of removing that package
altogether.</p>
<pre><code class="language-shell-session">$ opam pin list
hc.0.3 git git+https://git.zapashcanon.fr/zapashcanon/hc.git
</code></pre>
<pre><code class="language-shell-session">$ opam list hc
# Packages matching: name-match(hc) & (installed | available)
# Package # Installed # Synopsis
hc.0.3 0.3 pinned to version 0.3 at git+https://git.zapashcanon.fr/zapashcanon/hc.git
</code></pre>
<pre><code class="language-shell-session">$ opam pin remove hc
Ok, hc is no longer pinned to git+https://git.zapashcanon.fr/zapashcanon/hc.git (version 0.3)
The following actions will be performed:
↻ recompile hc 0.3
Do you want to continue? [Y/n] y
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><><><><><><>
⬇ retrieved hc.0.3 (https://opam.ocaml.org/cache)
⊘ removed hc.0.3
∗ installed hc.0.3
Done.
</code></pre>
<pre><code class="language-shell-session">$ opam list hc
# Packages matching: name-match(hc) & (installed | available)
# Package # Installed # Synopsis
hc.0.3 0.3 Hashconsing library
</code></pre>
<p>As we can see in the details:</p>
<pre><code class="language-shell-session">⬇ retrieved hc.0.3 (https://opam.ocaml.org/cache)
</code></pre>
<p><code>opam</code> has retrieved the sources from the archive that is specified in the
<code>opam file</code> of the relevant opam <code>repository</code>, thus pulling <code>hc</code> back down to
its latest available, <em>current-switch compatible</em>, release.</p>
<blockquote>
<p>Notice the absence of the <code>*</code> character next to the package action? It
means the package is no longer pinned.</p>
</blockquote>
<h4>
<a id="unreleasedpins" class="anchor"></a><a class="anchor-link" href="#unreleasedpins">Unreleased packages</a>
</h4>
<p>On the other hand, for an unreleased package, the pin itself <strong>is</strong> its only
definition source—meaning both the <strong>location of its source code</strong> and <strong>all the
information required for <code>opam</code> to operate</strong>, found in the corresponding <code>opam file</code>—so <code>opam</code> has no other choice than to offer to
remove it for you.</p>
<pre><code class="language-shell-session">$ opam pin list
opam-otopop.0.1 git git+https://github.com/rjbou/opam-otopop#master
</code></pre>
<p>In this case, <code>opam unpin <package-name></code> (or equivalently: <code>opam pin remove <package-name></code>) launches an <code>opam remove</code> action:</p>
<pre><code class="language-shell-session">$ opam pin remove opam-otopop
Ok, opam-otopop is no longer pinned to git+https://github.com/rjbou/opam-otopop#master (version 0.1)
The following actions will be performed:
⊘ remove opam-otopop 0.1
Do you want to continue? [Y/n] y
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><><><><><><>
⊘ removed opam-otopop.0.1
Done.
</code></pre>
<h4>
<a id="unpinnoaction" class="anchor"></a><a class="anchor-link" href="#unpinnoaction">Unpin but do no action</a>
</h4>
<p>Just like with the <code>opam pin add</code> command, the <code>--no-action</code> option is
available when removing pins. It will <strong>only</strong> unpin the package, without
removing it, or recompiling it.</p>
<pre><code class="language-shell-session">$ opam pin remove opam-otopop --no-action
Ok, opam-otopop is no longer pinned to git+https://github.com/rjbou/opam-otopop#master (version 0.1)
$ opam list opam-otopop
# Packages matching: name-match(opam-otopop) & (installed | available)
# Package # Installed # Synopsis
opam-otopop.0.1 0.1 An opam-otopop package
</code></pre>
<p>You may use it to remove the <code>pin</code> from a package while still keeping it
installed in your <code>switch</code>, or to later replace it by its opam <code>repository</code>
definition.</p>
<p>The resulting package remains linked to its URL, but it is no longer considered
pinned, so there will be no update or automatic syncing to follow the changes
of the upstream branch.</p>
<p>You may also consider this feature to prepare a specific action, say, as a
temporary state. For example, you could unpin several packages in a row, and
then proceed to recompiling the whole batch in one go.</p>
<h3>
<a id="multiple" class="anchor"></a><a class="anchor-link" href="#multiple">One URL to pin them all: handling a multi-package repository</a>
</h3>
<p>Every example seen so far had a single <code>opam file</code> at the root of its
work tree (or sometimes in a dedicated <code>opam/</code> directory).</p>
<p>Yet it is possible for some projects to have several packages distributed by a
single repository. An example of this would be the
<a href="https://github.com/ocaml/opam">opam project source repository itself</a>. If
that is the case, and you pin that URL, the default behaviour is that all the
packages defined at that address will be pinned.</p>
<p>Let's take <a href="https://github.com/OCamlPro/ocp-index">this project</a>.</p>
<p>You can see that several packages are defined: <code>ocp-index</code> and <code>ocp-browser</code>.</p>
<p>Here's how a <code>pin</code> action behaves when given that URL:</p>
<pre><code class="language-shell-session">$ opam pin add git+https://github.com/OCamlPro/ocp-index
This will pin the following packages: ocp-browser, ocp-index.
Continue? [Y/n] y
ocp-browser is now pinned to git+https://github.com/OCamlPro/ocp-index (version 1.3.6)
ocp-index is now pinned to git+https://github.com/OCamlPro/ocp-index (version 1.3.6)
The following actions will be performed:
∗ install ocp-indent 1.8.1 [required by ocp-index]
∗ install ocp-index 1.3.6*
∗ install ocp-browser 1.3.6*
===== ∗ 3 =====
Do you want to continue? [Y/n] y
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
⬇ retrieved ocp-indent.1.8.1 (https://opam.ocaml.org/cache)
∗ installed ocp-indent.1.8.1
∗ installed ocp-index.1.3.6
∗ installed ocp-browser.1.3.6
Done.
</code></pre>
<p>As you can see, this process is exactly the same as before, but handling several
packages in one go.</p>
<p><strong>What if I do not want to pin every package in that repository?</strong></p>
<p>Easy: if you just need one of the packages found at that URL, you can just feed
that package name to the <code>opam pin add <package-name> <url></code> CLI call, just
like we did at the beginning of this tutorial!</p>
<pre><code class="language-shell-session">$ opam pin add ocp-index git+https://github.com/OCamlPro/ocp-index
[ocp-index.1.3.6] synchronised (git+https://github.com/OCamlPro/ocp-index)
ocp-index is now pinned to git+https://github.com/OCamlPro/ocp-index (version 1.3.6)
The following actions will be performed:
∗ install ocp-indent 1.8.1 [required by ocp-index]
∗ install ocp-index 1.3.6*
===== ∗ 2 =====
Do you want to continue? [Y/n] y
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
⬇ retrieved ocp-indent.1.8.1 (cached)
∗ installed ocp-indent.1.8.1
∗ installed ocp-index.1.3.6
Done.
</code></pre>
<p>If you do not know the exact names of these different packages, you may also
consider using the very handy <code>opam pin scan</code> command, which will look up the
contents of the repository at the given URL and list its <code>opam</code> packages for you:</p>
<pre><code class="language-shell-session">$ opam pin scan git+https://github.com/OCamlPro/ocp-index
# Name # Version # Url
ocp-index 1.3.6 git+https://github.com/OCamlPro/ocp-index
ocp-browser 1.3.6 git+https://github.com/OCamlPro/ocp-index
</code></pre>
<h3>
<a id="version" class="anchor"></a><a class="anchor-link" href="#version">Setting arbitrary version numbers, toying with fire</a>
</h3>
<p>As demonstrated <a href="#opampinurl">earlier</a>, <code>opam</code> will choose a version of the
pinned package according to the contents of the <code>opam file</code>.</p>
<p>The important thing to take away from that is, in most usual scenarios, the
contents of the <code>opam file</code> are paramount to how <code>opam</code> will calculate
compatibility constraints in a given <code>switch</code>.</p>
<p>It is <strong>from</strong> the information that is hardcoded <strong>inside</strong> the <code>opam file</code>
that <code>opam</code> is able to make educated decisions whenever changes to the
state of your current <code>switch</code> are to be made. There is, however, a way to
circumvent that behaviour, which we want to tell you about, even though it calls for a
bit of precaution.</p>
<blockquote>
<p>Naturally, directly tinkering with such a key stability feature as
<code>compatibility constraints solving</code> does require you to <strong>tread carefully</strong>.
We will see together some of the pitfalls, and things to do, that will keep you
from finding yourself in confusing situations with regard to the state of your
<code>switch</code> and the dependencies within it.</p>
</blockquote>
<p><strong>Ready? Let's get acquainted with our first slightly <em>dangerous</em> <code>opam</code>
feature:</strong></p>
<p>You are allowed to append an <strong>arbitrary</strong> version number to the name of the
pinned package for <code>opam</code> to incorporate in its calculations, as seen in the
following code block:</p>
<pre><code class="language-shell-session">$ opam pin add directories.1.0 git+https://github.com/ocamlpro/directories --no-action
[directories.1.0] synchronised (git+https://github.com/ocamlpro/directories)
directories is now pinned to git+https://github.com/ocamlpro/directories (version 1.0)
</code></pre>
<p>In this specific example, the package
<a href="https://github.com/ocaml/opam-repository/blob/master/packages/directories/directories.0.5/opam"><code>directories</code></a>
is available in the opam
<a href="https://ocaml.org/p/directories/latest"><code>repository</code></a> that our <code>opam</code>
installation is synchronised with. However, there is no such <code>1.0</code> version in
that <code>repository</code>. Not a single reference to such a version number can be found
at that address, neither in the <code>tags</code> nor in the <code>releases</code> of the repository, nor
even in the <a href="https://github.com/OCamlPro/directories/blob/master/directories.opam"><code>opam file</code></a>.</p>
<pre><code class="language-shell-session">$ opam show directories --field all-versions
0.1 0.2 0.3 0.4 0.5
</code></pre>
<p>What we have done here is effectively tell <code>opam</code> that <code>directories</code> is at
a different version number than it <strong>actually</strong> is, strictly
speaking...</p>
<p><strong>But why would we want to do such a thing?</strong></p>
<hr />
<p>Let's consider a reasonable use-case for <code>opam pin add <package>.<my-version-number> <url></code>:</p>
<p>You have been working on a project called <code>my-project</code> for some time and you
are using a package named <code>fst-dep</code> for your development.</p>
<p>Below, you will find an excerpt of the <code>fst-dep.opam</code> file, specifically its
dependencies:</p>
<pre><code class="language-shell-session">depends: [
"dep-to-try" { <= "3.0.0" }
"other-dep"
]
</code></pre>
<p>All three packages (<code>fst-dep</code>, <code>dep-to-try</code> and <code>other-dep</code>) are
installed in your current switch and are available on your favourite opam
<code>repository</code>.</p>
<p>One day you go about checking the repository for each dependency, and you find
that <code>dep-to-try</code> has just had one of its main features <strong>reimplemented</strong>,
improved and optimised; its maintainers are preparing to release a <code>4.0.0</code> version soon.</p>
<p>See, these changes would have been available for you to fetch directly from
its <em>development</em> repository had you been working with it directly, but you
are not. It is up to the maintainers of <code>fst-dep</code> to do that work.</p>
<p>Since you have no ownership over any of these dependencies, you have no way of
changing any of the version constraints in this tiny dependency tree that
ranges from <code>fst-dep</code> upwards.</p>
<p><strong>Here are the three mainstream solutions to this problem:</strong></p>
<ul>
<li>Wait for both packages to publish new releases. A new official release from
the <code>dep-to-try</code> team, which would ship said reimplementation, and another
from the <code>fst-dep</code> team which would update its dependency tree to include
<code>dep-to-try</code>'s latest version. Needless to say that this could take an
arbitrary amount of time which is unsatisfying at best.
</li>
<li>Another suboptimal solution would be to copy the current state of the
entire opam <code>repository</code> relevant to your package distribution, go to the
corresponding directory for <code>fst-dep</code> inside that <code>repository</code>, relax the hard
dependency <code>"dep-to-try" { <= "3.0.0" }</code> and reinstall all the packages that
are directly or indirectly affected by that change. A very time consuming
task for such a small edit to the global dependency tree.
</li>
<li>The last option would be to pin <code>fst-dep</code>, then manually edit the
dependencies of <code>fst-dep</code> with the <code>opam pin --edit</code> option to relax the
dependency. The only pitfall with this solution is that, in a context
where <code>dep-to-try</code> is a <strong>key</strong> package in the OCaml distribution, and many
other packages depend on it as well, you might have to do <strong>a lot</strong> of
editing to make your <code>switch</code> a stable environment with all dependency
constraints met...
</li>
</ul>
<p>So none of these solutions fits our needs. They are all unsatisfactory at
best and counter-productive at worst.</p>
<p><strong>That's when <code>arbitrary version pinning</code> shines.</strong></p>
<p>The main benefit of this feature is that it allows for added flexibility in
navigating and tweaking the compatibility tree of any opam <code>repository</code> at the
<code>switch</code>-level. It provides the user with ways to circumvent all tasks
pertaining to a larger operation on the global graph of packages.</p>
<pre><code class="language-shell-session">$ opam pin dep-to-try.3.0.0 git+https://github.com/OCamlPro/dep-to-try
[dep-to-try.3.0.0] synchronised (file:///home/rjbou/ocamlpro/opam_bps_examples/dep-to-try)
dep-to-try is now pinned to git+https://github.com/OCamlPro/dep-to-try#master (version 3.0.0)
</code></pre>
<p><code>opam</code> will still think that <code>dep-to-try</code>'s version is valid (<code>{ <= "3.0.0"}</code>),
even though you are synchronised with the state of its <em>development</em> branch, thus giving
you access to the latest changes with the minimal amount of manual editing
required. Pretty neat, right?</p>
<p>Now, onto the pitfalls that you should keep in mind when tinkering with your
dependencies like that.</p>
<p><strong>What kind of predicament awaits you?</strong></p>
<ol>
<li>You could introduce unforeseen behaviours. This could be anything from
errors at compile-time, if <code>dep-to-try</code>'s interfaces have changed
significantly, to runtime crashes if you're unlucky.
</li>
<li>Another source of confusion could arise if you happen to use the <code>opam unpin dep-to-try --no-action</code> command on such a package. After unpinning it,
there's a chance that you would later forget it used to be pinned to a <em>development</em>
version. There would be little to no way for you to remember which package
it was that you had experimented with at some point. You would either have
to inspect all your installed packages, or even remake a <code>switch</code> from scratch
which would not be affected by your reckless <code>arbitrary version pinning</code> and
would work just fine after that.
</li>
</ol>
<p>Our advice is rather simple: use this feature with discretion and try to avoid
unpinning packages if it's not to reinstall or remove them altogether. If you
follow these instructions, you <strong>should</strong> be safe...</p>
<h3>
<a id="morefire" class="anchor"></a><a class="anchor-link" href="#morefire">Setting multiple arbitrary version numbers</a>
</h3>
<p>One last bit of black magic for you to play around with.</p>
<p>Instead of pinning <code>package-name.my-version-number</code>, you may use the
<code>--with-version</code> option to pin the packages at a given URL to an arbitrary version. A
key detail is that it is compatible with multi-package (multiple <code>opam file</code>) pinning... Just
keep in mind that all the pitfalls mentioned previously apply here too, only
with multiple packages at once, which could make things even more confusing.
<p>Below, you can see that we are setting <strong>all</strong> the packages found in that
repository to the same version:</p>
<pre><code class="language-shell-session">$ opam pin add git+https://github.com/OCamlPro/ocp-index --with-version 2.0.0
This will pin the following packages: ocp-browser, ocp-index.
Continue? [Y/n] y
ocp-browser is now pinned to git+https://github.com/OCamlPro/ocp-index (version 2.0.0)
ocp-index is now pinned to git+https://github.com/OCamlPro/ocp-index (version 2.0.0)
The following actions will be performed:
∗ install ocp-indent 1.8.1 [required by ocp-index]
∗ install ocp-index 2.0.0*
∗ install ocp-browser 2.0.0*
===== ∗ 3 =====
Do you want to continue? [Y/n] y
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
⬇ retrieved ocp-indent.1.8.1 (cached)
⬇ retrieved ocp-index.2.0.0 (no changes)
⬇ retrieved ocp-browser.2.0.0 (no changes)
∗ installed ocp-indent.1.8.1
∗ installed ocp-index.2.0.0
∗ installed ocp-browser.2.0.0
Done.
</code></pre>
<p>You can see that all these packages are pinned to <code>2.0.0</code> now.</p>
<pre><code class="language-shell-session">$ opam pin list
ocp-browser.2.0.0 git git+https://github.com/OCamlPro/ocp-index
ocp-index.2.0.0 git git+https://github.com/OCamlPro/ocp-index
</code></pre>
<h2>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h2>
<p>Here it is, the <code>opam pin</code> command in most of its glory.</p>
<p>If you have managed to stick with this article this far, you should no
longer feel confused about pinning packages and should now have another of
<code>opam</code>'s most commonly used features in your arsenal when tackling your own
development challenges!</p>
<p>So it is that we have learned about pinning both released and unreleased
packages. Additionally, we showcased several features for orthogonal use-cases:
from the more <em>quality of life</em>-oriented calls such as <code>opam show</code> and <code>opam pin scan</code>, to obscure features like arbitrary version pinning as well as
ordinary options like <code>--no-action</code>, <code>--dev-repo</code> and subcommands like <code>opam unpin</code>.</p>
<p>We are steadily approaching a level of familiarity with <code>opam</code> that will
allow us to get into some really neat features soon.</p>
<p>Be sure to stay tuned with our blog, the journey into the rabbit hole has only
started and <code>opam</code> is a deep one indeed!</p>
<hr />
<p>Thank you for reading,</p>
<p>From 2011, with love,</p>
<p>The OCamlPro Team</p>
Flambda2 Ep. 1: Foundational Design Decisionshttps://ocamlpro.com/blog/2024_03_19_the_flambda2_snippets_12024-03-19T08:12:13Z2024-03-19T08:12:13Z
Pierre Chambart
Vincent Laviron
Guillaume Bury
Nathanaëlle Courant
Dario Pinto
Welcome to The Flambda2 Snippets! In this first post of The Flambda2 Snippets, we dive into the powerful CPS-based internal representation used within the Flambda2 optimizer, which was one of the main motivation to move on from the former Flambda optimizer. Credit goes to Andrew Kennedy's paper Comp...<h3>Welcome to <strong>The Flambda2 Snippets</strong>!</h3>
<p>In this first post of <a href="/blog/2024_03_18_the_flambda2_snippets_0/">The Flambda2
Snippets</a>, we
dive into the powerful CPS-based internal representation used within the
<a href="https://github.com/ocaml-flambda/flambda-backend/tree/main/middle_end/flambda2">Flambda2 optimizer</a>,
which was one of the main motivations for moving on from the former Flambda optimizer.</p>
<p><strong>Credit goes to Andrew Kennedy's paper <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/compilingwithcontinuationscontinued.pdf"><em>Compiling with Continuations,
Continued</em></a>
for pointing us in this direction.</strong></p>
<blockquote>
<p>The <strong>F2S</strong> blog posts aim at gradually introducing the world to the
inner-workings of a complex piece of software engineering: The <code>Flambda2 Optimising Compiler</code>, a technical marvel born from a 10 year-long effort in
Research & Development and Compilation; with many more years of expertise in
all aspects of Computer Science and Formal Methods.</p>
</blockquote>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#cps">CPS (Continuation Passing Style)</a>
</li>
<li><a href="#double-barrelled">Double Barrelled CPS</a>
</li>
<li><a href="#term">The Flambda2 Term Language</a>
</li>
<li><a href="#roadmap">Following up</a>
</div>
</li>
</ul>
<hr />
<h2>
<a id="cps" class="anchor"></a><a class="anchor-link" href="#cps">CPS (Continuation Passing Style)</a>
</h2>
<p>Terms in the <code>Flambda2</code> IR are represented in CPS style, so let us briefly
explain what that means.</p>
<p>Some readers may already be familiar with what we call <em>First-Class CPS</em> where
continuations are represented using functions of the language:</p>
<pre><code class="language-ocaml">(* Non-tail-recursive implementation of map *)
let rec map f = function
| [] -> []
| x :: r -> f x :: map f r
(* Tail-recursive CPS implementation of map *)
let rec map_cps f l k =
match l with
| [] -> k []
| x :: r -> let fx = f x in map_cps f r (fun r -> k (fx :: r))
</code></pre>
<p>This kind of transformation is useful to make a recursive function
tail-recursive and sometimes to avoid allocations for functions returning
multiple values.</p>
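<p>As a quick usage example of the <code>map_cps</code> function defined above, passing the
identity function as the initial continuation simply returns the resulting list:</p>
<pre><code class="language-ocaml">(* Calling the CPS version defined above with the identity continuation. *)
let doubled = map_cps (fun x -> 2 * x) [1; 2; 3] (fun r -> r)
(* doubled = [2; 4; 6] *)
</code></pre>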
<p>In <code>Flambda2</code>, we use <em>Second-Class CPS</em> instead, where continuations are
<strong>control-flow constructs in the Intermediate Language</strong>. In practice, this is
equivalent to an explicit representation of a control-flow graph.</p>
<p>Here's an example using some <strong>hopefully</strong> intuitive syntax for the <code>Flambda2</code>
IR.</p>
<pre><code class="language-ocaml">let rec map f = function
| [] -> []
| x :: r -> f x :: map f r
(* WARNING: FLAMBDA2 PSEUDO-SYNTAX INBOUND *)
let rec map
((f : <whatever_type1> ),
(param : <whatever_type2>))
{k_return_continuation : <return_type>}
{
let_cont k_empty () = k_return_continuation [] in
let_cont k_nonempty x r =
let_cont k_after_f fx =
let_cont k_after_map map_result =
let result = fx :: map_result in
k_return_continuation result
in
Apply (map f r {k_after_map})
in
Apply (f x {k_after_f})
in
match param with
| [] -> k_empty ()
| x :: r -> k_nonempty x r
}
</code></pre>
<p>Every <code>let_cont</code> binding declares a new sequence of instructions in the
control-flow graph, which can be terminated either by:</p>
<ul>
<li>calling a continuation (for example, <code>k_return_continuation</code>) which takes a
fixed number of parameters;
</li>
<li>applying an OCaml function (<code>Apply</code>); this function takes as a special
parameter the continuation to which it must jump at the end of its execution.
Unlike continuations, OCaml functions can take a number of arguments that
does not match the number of parameters at their definition;
</li>
<li>branching constructions like <code>match _ with</code> and <code>if _ then _ else _</code>; in
these cases each branch is a call to a (potentially different) continuation.
</li>
</ul>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/flambda2_snippets2_ep1_figure1.png">
<img alt="This image shows the previous code represented as a graph." src="/blog/assets/img/flambda2_snippets2_ep1_figure1.png"/>
</a>
<div class="caption">
This image shows the previous code represented as a graph.
</div>
</p>
</div>
</p>
<blockquote>
<p>Notice that some boxes are nested to represent scoping relations: variables
defined in the outer boxes are available in the inner ones.</p>
</blockquote>
<p>To demonstrate the kinds of optimisations that such control-flow graphs allow
us to perform, see the following simple example:</p>
<p><strong>Original Program:</strong></p>
<pre><code class="language-ocaml">let f cond =
let v =
if cond then
raise Failure
else 0
in
v, v
</code></pre>
<p>We then represent the same program using CPS in two steps: the first is the
direct translation of the original program; the second is an equivalent program
represented in a more compact form.</p>
<p><strong>Minimal CPS transformation, using pseudo-syntax</strong></p>
<pre><code class="language-ocaml">(* showing only the body of f *)
(* STEP 1 - Before graph compaction *)
let_cont k_after_if v =
let result = v, v in
k_return_continuation result
in
let_cont k_then () = k_raise_exception Failure in
let_cont k_else () = k_after_if 0 in
if cond then k_then () else k_else ()
</code></pre>
<p>which becomes after inlining <code>k_after_if</code>:</p>
<pre><code class="language-ocaml">(* STEP 2 - After graph compaction *)
let_cont k_then () = k_raise_exception Failure in
let_cont k_else () =
let v = 0 in
let result = v, v in
k_return_continuation result
in
if cond then k_then () else k_else ()
</code></pre>
<p>This allows us, by using the translation to CPS and back, to transform the
original program into the following:</p>
<p><strong>Optimized original program</strong></p>
<pre><code class="language-ocaml">let f cond =
if cond then
raise Failure
else 0, 0
</code></pre>
<p>As you can see, the original program is simpler now. The nature of the changes
operated on the code is in fact not tied to a particular optimisation but
rather to the nature of the CPS transformation itself. Moreover, we do want to
actively perform optimisations and, to that extent, having an intermediate
representation that is equivalent to a control-flow graph allows us to benefit
from the huge amount of literature on the subject of static analysis of
imperative programs, which are often represented as control-flow graphs.</p>
<p>To be fair, in the previous example, we have cheated in how we have translated
the <code>raise</code> primitive. Indeed, we used a simple continuation
(<code>k_raise_exception</code>) that we haven't defined anywhere prior. This is
possible because of our use of Double Barrelled CPS.</p>
<h2>
<a id="double-barrelled" class="anchor"></a><a class="anchor-link" href="#double-barrelled">Double Barrelled CPS</a>
</h2>
<p>In OCaml, all functions can not only return normally (Barrel 1) but also throw
exceptions (Barrel 2); this corresponds to two different paths in the
control flow, and we need the ability to represent both in our own control-flow graph.</p>
<p>Hence the name: <code>Double Barrelled CPS</code>, which we took from <a href="https://web.archive.org/web/20210420165356/https://www.cs.bham.ac.uk/~hxt/research/HOSC-double-barrel.pdf">this
paper</a>
by Hayo Thielecke. In practice this only has consequences in four places:</p>
<ol>
<li>the function definitions must have two special parameters instead of one:
the exception continuation (<code>k_raise_exception</code>) in addition to the normal
return continuation (<code>k_return_continuation</code>);
</li>
<li>the function applications must have two special arguments, reciprocally;
</li>
<li><code>try ... with</code> terms are translated using regular continuations, with the
exception handler (the <code>with</code> path of the construct) compiled to a
continuation handler (<code>let_cont</code>);
</li>
<li><code>raise</code> terms are translated into continuation calls, to either the current
function's exception continuation (when the exception is not caught locally) or the
enclosing <code>try ... with</code> handler continuation (see the sketch after this list).
</li>
</ol>
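<p>To make points 3 and 4 more concrete, here is a hedged sketch in the same
informal pseudo-syntax as above. It is not actual compiler output, and the exact
notation for passing the exception continuation to <code>Apply</code> is our own invention
for illustration purposes:</p>
<pre><code class="language-ocaml">let g f = try f () with _ -> 0
(* WARNING: FLAMBDA2 PSEUDO-SYNTAX INBOUND (illustrative sketch only) *)
let g ((f : <whatever_type>))
  {k_return_continuation : <return_type>}
  {k_raise_exception}
{
  (* point 3: the [with] handler becomes a continuation handler *)
  let_cont k_handler exn = k_return_continuation 0 in
  (* points 2 and 4: the call to [f] passes both special continuations,
     so a [raise] inside [f] jumps to [k_handler] *)
  Apply (f () {k_return_continuation} {k_handler})
}
</code></pre>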
<h2>
<a id="term" class="anchor"></a><a class="anchor-link" href="#term">The Flambda2 Term Language</a>
</h2>
<p>This CPS form has directed the concrete implementation of the FL2 language.</p>
<p>We can see that the previous IRs have very descriptive representations, with
about 20 constructors for <code>Clambda</code> and 15 for <code>Flambda</code>, while <code>Flambda2</code> has
regrouped all these features into only 6 categories, sorted by how
they affect the control flow.</p>
<pre><code class="language-ocaml">type expr =
| Let of let_expr
| Let_cont of let_cont_expr
| Apply of apply
| Apply_cont of apply_cont
| Switch of switch
| Invalid of { message : string }
</code></pre>
<p>The main benefits we reap from such a strong design choice are:</p>
<ul>
<li>Better code organisation: dealing with control-flow is only done when
matching on full expressions, and dealing with specific features of the
language is done at a lower level;
</li>
<li>Reduced code duplication: features that behave in a similar way will have
their common code shared by design;
</li>
</ul>
<h2>
<a id="roadmap" class="anchor"></a><a class="anchor-link" href="#roadmap">Following up</a>
</h2>
<p>The goal of this article was to show a fundamental design choice in <code>Flambda2</code>
which is using a CPS-based representation. This design is felt throughout the
<code>Flambda2</code> architecture and will be mentioned and strengthened again in later
posts.</p>
<p><code>Flambda2</code> takes the <code>Lambda</code> IR as input, then performs <code>CPS conversion</code>,
followed by <code>Closure conversion</code>, each of them worth their own blog post, and
this produces the terms in the <code>Flambda2</code> IR.</p>
<p>From there, we have our main optimisation pass that we call <code>Simplify</code> which
first performs static analysis on the term during a single <code>Downwards Traversal</code>,
and then rebuilds an optimised term during the <code>Upwards Traversal</code>.</p>
<p>Once we have an optimised term, we can convert it to the <code>CMM</code> IR and feed it
to the rest of the backend. This part is mostly CPS elimination but with added
original and interesting work we will detail in a specific snippet.</p>
<p>The single-pass design allows us to consider all the interactions between
optimisations.</p>
<p>Some examples of optimisations performed during <code>Simplify</code>:</p>
<ul>
<li>Inlining of function calls;
</li>
<li>Constant propagation;
</li>
<li>Dead code elimination;
</li>
<li>Loopification, that is transforming tail-recursive functions into loops;
</li>
<li>Unboxing;
</li>
<li>Specialisation of polymorphic primitives;
</li>
</ul>
<p>Most of the following snippets will detail one or several parts of these
optimisations.</p>
<p><strong>Stay tuned, and thank you for reading!</strong></p>
Behind the Scenes of the OCaml Optimising Compiler Flambda2: Introduction and Roadmap https://ocamlpro.com/blog/2024_03_18_the_flambda2_snippets_02024-03-18T08:12:13Z2024-03-18T08:12:13Z
Pierre Chambart
Vincent Laviron
Dario Pinto
Introducing our Flambda2 snippets At OCamlPro, the main ongoing task on the OCaml Compiler is to improve the high-level optimisation. This is something that we have been doing for quite some time now. Indeed, we are the authors behind the Flambda optimisation pass and today we would like to introduc...<p></p>
<h2>
<a id="introduction" class="anchor"></a><a class="anchor-link" href="#introduction">Introducing our Flambda2 snippets</a>
</h2>
<blockquote>
<p>At OCamlPro, the main ongoing task on the OCaml Compiler is to improve the
high-level optimisation. This is something that we have been doing for quite
some time now. Indeed, we are the authors behind the <code>Flambda</code> optimisation
pass and today we would like to introduce the series of blog snippets
showcasing the direct successor to it, the creatively named <code>Flambda2</code>.</p>
</blockquote>
<p>This series of blog posts will cover everything about <code>Flambda2</code>, a
new optimising backend for the OCaml native compiler. This
introductory episode will provide you with some context and history
about <a href="https://github.com/ocaml-flambda/flambda-backend"><code>Flambda2</code></a>
but also about its predecessor <code>Flambda</code> and, of course, the OCaml
compiler!</p>
<p>This work may be considered as a complement to an ongoing documentation
effort at OCamlPro, as well as to the many talks we gave last
year on the subject, two of which you can watch online: <a href="https://www.youtube.com/watch?v=eI5GBpT2Brs">OCaml Workshop</a> ( <a href="https://cambium.inria.fr/seminaires/transparents/20230626.Vincent.Laviron.pdf">slideshow</a> ), <a href="https://www.youtube.com/watch?v=PRb8tRfxX3s">ML
Workshop</a> ( <a href="https://cambium.inria.fr/seminaires/transparents/20230828.Vincent.Laviron.pdf">slideshow</a> ).</p>
<p><strong>This work was developed in collaboration with, and funded by Jane Street.
Warm thanks to Mark Shinwell for shepherding the Flambda project and to Ron
Minsky for his support.</strong></p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#introduction">Introduction</a>
</li>
<li><a href="#compiling">Compiling OCaml</a>
</li>
<li><a href="#roadmap">Snippets Roadmap</a>
</li>
<li><a href="#listing">The F2S Series!</a>
</div>
</li>
</ul>
<h2>
<a id="compiling" class="anchor"></a><a class="anchor-link" href="#compiling">Compiling OCaml</a>
</h2>
<p>Compiling OCaml is done through a multitude of passes (see the simplified
representation below), and the bulk of high-level optimisations happens between
the <code>Lambda</code> IR (Intermediate Representation) and <code>CMM</code> (which stands
for <em>C--</em>). This set of optimisations will be the main focus of this series of
snippets.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/flambda2_snippets_ep0_figure3_1.png">
<img alt="The different passes of the OCaml compilers, from sources to executable code, before the addition of <code>Flambda</code>." src="/blog/assets/img/flambda2_snippets_ep0_figure3_1.png"/>
</a>
<div class="caption">
The different passes of the OCaml compilers, from sources to executable code, before the addition of <code>Flambda</code>.
</div>
</p>
</div>
</p>
<p>Indeed, that part of the compiler is quite crowded. Originally, after
the frontend has type-checked the sources, the <code>Closure</code> pass was in
charge of transforming the <code>Lambda</code> IR <a href="https://github.com/ocaml/ocaml/blob/34cf5aafcedc2f7895c7f5f0ac27c7e58e4f4adf/lambda/lambda.mli#L279">(see source
code)</a>
into the <code>Clambda</code> IR <a href="https://github.com/ocaml/ocaml/blob/cce52acc7c7903e92078e9fe40745e11a1b944f0/middle_end/clambda.mli#L57">(see source
code)</a>.
This transformation handles <a href="https://en.wikipedia.org/wiki/Constant_folding"><em>Constant
Propagation</em></a>, some
<a href="https://en.wikipedia.org/wiki/Inline_expansion"><em>inlining</em></a>, and some
<em>Constant Lifting</em> (moving constant structures to static
allocation). Then, a subsequent pass (called <code>Cmmgen</code>) transforms the
<code>Clambda</code> IR into the <code>CMM</code> IR <a href="https://github.com/ocaml/ocaml/blob/cce52acc7c7903e92078e9fe40745e11a1b944f0/asmcomp/cmm.mli#L168">(see source
code)</a>
and handles some <a href="https://en.wikipedia.org/wiki/Peephole_optimization">peep-hole
optimisations</a> and
<a href="https://en.wikipedia.org/wiki/Boxing_(computer_science)"><em>unboxing</em></a>. This final representation will be used by architecture-specific
backends to produce assembler code.</p>
<p>Before we get any further into the <strong>hairy</strong> details of <code>Flambda2</code> in the
upcoming snippets, it is important that we address some context.</p>
<p>We introduced the <code>Flambda</code> framework which was <a href="https://blog.janestreet.com/flambda/">released with <code>OCaml 4.03</code></a>. This was a success in improving
<em>inlining</em> and related optimisations, and has been stable ever since,
with very few bug reports.</p>
<p>We kept both <code>Closure</code> and <code>Flambda</code> alive together because some users cared a
lot about the compilation speed of OCaml - <code>Flambda</code> is indeed a bit slower
than <code>Closure</code>.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/flambda2_snippets_ep0_figure3_2.png">
<img alt="<code>Flambda</code> provides an alternative to the classic <code>Closure</code> transformation, with additionnal optimizations." src="/blog/assets/img/flambda2_snippets_ep0_figure3_2.png"/>
</a>
<div class="caption">
<code>Flambda</code> provides an alternative to the classic <code>Closure</code> transformation, with additional optimizations.
</div>
</p>
</div>
</p>
<p>Now it is time to introduce an alternative to both <code>Flambda</code> and <code>Closure</code>:
<code>Flambda2</code>, which is meant to eventually replace <code>Flambda</code>, and potentially
<code>Closure</code> as well. In fact, Jane Street has been gradually moving from <code>Closure</code>
and <code>Flambda</code> to <code>Flambda2</code> over the past year and, to this day, has no
systems left relying on <code>Closure</code> or <code>Flambda</code>.</p>
<blockquote>
<p>You can read more about the transition from staging to production-level
workloads of <code>Flambda2</code> right <a href="https://ocamlpro.com/blog/2023_06_30_2022_at_ocamlpro/#flambda">here</a>.</p>
</blockquote>
<p><code>Flambda</code> is still maintained and will be for the foreseeable future. However,
we have noticed some limitations that prevented us from implementing certain kinds of
optimisations; we will elaborate on these in the following episodes of <em>The
Flambda2 Snippets</em> series.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/flambda2_snippets_ep0_figure3.png">
<img alt="<code>Flambda2</code> provides a much extended alternative to Flambda, from <code>Lambda</code> IR to <code>CMM</code>." src="/blog/assets/img/flambda2_snippets_ep0_figure3.png"/>
</a>
<div class="caption">
<code>Flambda2</code> provides a much extended alternative to Flambda, from <code>Lambda</code> IR to <code>CMM</code>.
</div>
</p>
</div>
</p>
<p>One obvious difference to notice is that <code>Flambda2</code> translates directly to <code>CMM</code>,
circumventing the <code>Clambda</code> IR, allowing us to lift some limitations inherent
to <code>Clambda</code> itself.</p>
<p>Furthermore, after releasing <code>Flambda</code>, we experimented with the aim of
incrementally improving it and adding new optimisations. We tried to improve its
internal representation and noticed that we could gain a lot by doing so, but
also that it required deeper changes; that is what led us to <code>Flambda2</code>.</p>
<h2>
<a id="roadmap" class="anchor"></a><a class="anchor-link" href="#roadmap">Snippets Roadmap</a>
</h2>
<p>This is but the zeroth snippet of the series. It aims at providing you with
history and context for <code>Flambda2</code>.</p>
<p>You can expect the rest of the snippets to alternate between deep dives into the
technical aspects of <code>Flambda2</code>, and user-facing descriptions of the new
optimisations that we enable.</p>
<h2>
<a id="listing" class="anchor"></a><a class="anchor-link" href="#listing">The F2S Series!</a>
</h2>
<ul>
<li>
<p><a href="/blog/2024_01_31_the_flambda2_snippets_1">Episode 1: Foundational Design Decisions in Flambda2</a></p>
<p>The first snippet covers the characteristics and benefits of a CPS-based
internal representation for the optimisation of the OCaml language. It was
already covered in part <a href="https://icfp23.sigplan.org/details/ocaml-2023-papers/8/Efficient-OCaml-compilation-with-Flambda-2">at the OCaml
Workshop</a>
in 2023 and we go deeper into the subject in these blog posts.</p>
</li>
<li>
<p><a href="/blog/2024_05_07_the_flambda2_snippets_2">Episode 2: Loopifying Tail-Recursive Functions</a></p>
<p><code>Loopify</code> is the first optimisation algorithm that we introduce in the <strong>F2S</strong>
series. In this post, we break down the transformation of tail-recursive
functions in the context of reducing memory allocations inside the
<code>Flambda2</code> compiler. We start by giving broader context around
tail recursion and its optimisation, before diving into how this
transformation is both simple and representative of the philosophy behind all
the optimisations performed by the <code>Flambda2</code> compiler.</p>
</li>
<li>
<p><a href="/blog/2024_08_09_the_flambda2_snippets_3">Episode 3: Speculative Inlining</a></p>
<p>This article introduces <code>Speculative Inlining</code>, the name of the
algorithm responsible for computing and inlining optimised function code
inside <code>Flambda2</code>. We cover how quickly one is faced with complex
questions that only have heuristic answers when looking for an optimal
inlining choice. <code>Speculative Inlining</code> is also the best demonstration of how
we traverse code in our compilation pipeline.</p>
</li>
<li>
<p>Episode 4: Upward and Downward Traversals</p>
<blockquote>
<p>Coming soon...</p>
</blockquote>
</li>
</ul>
<p>Stay tuned, and thank you for reading!</p>
Lean 4: When Sound Programs become a Choicehttps://ocamlpro.com/blog/2024_03_07_lean4_when_sound_programs_become_a_choice2024-03-07T08:12:13Z2024-03-07T08:12:13Z
Adrien Champion
Dario Pinto
Monitoring Edge Technical Endeavours As a company specialized in strongly-typed programming languages with strong static guarantees, OCamlPro closely monitors the ongoing trend of bringing more and more of these elements into mainstream programming languages. Rust is a relatively recent example of t...<h2>
<a id="watch" class="anchor"></a><a class="anchor-link" href="#watch">Monitoring Edge Technical Endeavours</a>
</h2>
<p>As a company specialized in strongly-typed programming languages with strong
static guarantees, OCamlPro closely monitors the ongoing trend of bringing more
and more of these elements into mainstream programming languages. Rust is a
relatively recent example of this trend; another one is the very recent <a href="https://leanprover-community.github.io/index.html">Lean 4
language</a>.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#watch">Monitoring Edge Technical Software</a>
</li>
<li><a href="#lean4">Lean 4, the Promise of Proven Software</a>
</li>
<li><a href="#leanpro">OCamlPro for a Future of Trustworthy Software</a>
</div>
</li>
</ul>
<h3>
<a id="lean4" class="anchor"></a><a class="anchor-link" href="#lean4">Lean 4, the Promising Future of Proven Software</a>
</h3>
<p>Lean 4 builds on the shoulders of giants like the Coq proof assistant, and
languages such as OCaml and Haskell, to put programmers in a world where they
can write elegant programs, express their specification with the full power of
modern logics, and prove that these programs are correct with respect to their
specification. Doing all this in the same language is crucial as it can
streamline the certification process: once Lean 4 is trusted (audits,
certification...), then programs, specifications, and proofs are also trusted.
This contrasts with having a programming language, a specification language,
and a separate verification/certification tool, and then having to argue about
the trustworthiness of each of them, and that the glue linking all of them
together makes sense. This is extremely interesting in the context of critical
embedded systems in particular, and in qualified/certified "high-trust"
development in general.</p>
<p>While admittedly not as mainstream as Rust, Lean 4 has recently seen an
explosion in interest from the media, developers, mathematicians, and (some)
industry players. Quanta now <a href="https://www.quantamagazine.org/tag/computer-assisted-proofs">routinely publishes articles about or mentioning Lean
4</a>; Fields medalist Terry Tao is increasingly vocal about (and
productive with) his use of Lean 4, see <a href="https://terrytao.wordpress.com/2023/11/18/formalizing-the-proof-of-pfr-in-lean4-using-blueprint-a-short-tour">here</a> and <a href="https://terrytao.wordpress.com/2023/12/05/a-slightly-longer-lean-4-proof-tour">here</a> for (very
technical) examples. On the industrial side, Leonardo de Moura (Lean 4's lead
designer) recently moved from a position at Microsoft Research to Amazon Web
Services, which was followed by a fast and still ongoing expansion of the
infrastructure around Lean 4.</p>
<h3>
<a id="leanpro" class="anchor"></a><a class="anchor-link" href="#leanpro">Pushing for a Future of Trustworthy Software</a>
</h3>
<p>OCamlPro has been closely monitoring Lean 4's progress by regularly developing
in-house prototypes in Lean 4. Getting involved in the community and Lean 4's
development effort is also part of our culture. This is to give back to the
community, but also to closely follow the evolution of Lean 4 and sharpen our
skills.</p>
<p>There are a few notable and public examples of our involvement. As part of our
in-house prototyping, we discovered a <a href="https://leanprover.zulipchat.com/#narrow/stream/270676-lean4/topic/case.20in.20dependent.20match.20not.20triggering.20.28.3F.29/near/288328239">"major bug" in Lean 4's dependent
pattern-matching</a>; later, we contributed to <a href="https://github.com/leanprover/lean4/pull/1811">improving aspects of the
<code>by</code> notation</a> (used to construct proofs), which then ricocheted into
<a href="https://github.com/leanprover/lean4/pull/1844">fixing problems in the <code>calc</code> tactic</a>. More recently, we contributed
on various fronts, such as <a href="https://github.com/leanprover/lean4/issues/2988">improving the ecosystem's ergonomics</a>,
<a href="https://github.com/leanprover/std4/pull/233">adding useful lemmas to Lean 4's standard library</a>, and <a href="https://github.com/leanprover/lean4/pull/2167">contributing to
the documentation effort</a>...</p>
<p>Lean 4 is not industrial-strength yet, but it is getting closer and closer,
quickly enough for us to think that now is a reasonable time to start
exploring it.</p>
Opam 101: The First Stepshttps://ocamlpro.com/blog/2024_01_23_opam_101_the_first_steps2024-01-23T08:12:13Z2024-01-23T08:12:13Z
Dario Pinto
Raja Boujbel
Welcome, dear reader, to a new series of blog posts! This series will be about everything opam. Each article will cover a specific aspect of the package manager, and make sure to dissipate any confusion or misunderstandings on this keystone of the OCaml distribution! Each technical article will be t...<p></p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/opam-banniere-e1600868011587.png">
<img alt="Opam is like a magic box that allows people to be tidy when they share their work with the world, thus making the environment stable and predictable for everybody!" src="/blog/assets/img/opam-banniere-e1600868011587.png"/>
</a>
<div class="caption">
Opam is like a magic box that allows people to be tidy when they share their work with the world, thus making the environment stable and predictable for everybody!
</div>
</p>
</div>
</p>
<p>Welcome, dear reader, to a new series of blog posts!</p>
<p>This series will be about everything <code>opam</code>. Each article will cover a specific
aspect of the package manager, and make sure to dissipate any confusion or
misunderstandings on this keystone of the OCaml distribution!</p>
<p>Each technical article will be tailored for specific levels of engineering --
everyone, be they beginners, intermediate, or advanced in the <em>OCaml Arts</em>, will
find answers to some questions about <code>opam</code> right here.</p>
<blockquote>
<p>Check out each article's <code>tags</code> to get an idea of the entry level required for
the smoothest read possible!</p>
</blockquote>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#onboarding">Walking the path of opam, treading on solid ground</a>
</li>
<li><a href="#install">First step: installing opam</a>
</li>
<li><a href="#opaminit">Second step: initialisation</a>
</li>
<li><a href="#opamenv">Acclimating to the environment</a>
</li>
<li><a href="#switch">Switches, tailoring your workspace to your vision</a>
<ul>
<li><a href="#createaswitch">Creating a global switch</a>
</li>
<li><a href="#switchlocal">Creating a local switch</a>
</li>
</ul>
</li>
<li><a href="#opamrepo">The official opam-repository, the safe for all your packages</a>
</li>
<li><a href="#packages">Installing packages in your current switch</a>
</li>
<li><a href="#conclusion">Conclusion</a>
</div>
</li>
</ul>
<blockquote>
<p>New to the expansive OCaml sphere? As said on the official opam website,
<a href="https://opam.ocaml.org/about.html#A-little-bit-of-History">opam</a> has been a
game changer for the OCaml distribution, since it first saw the light of day
here, almost a decade ago.</p>
</blockquote>
<h2>
<a id="onboarding" class="anchor"></a><a class="anchor-link" href="#onboarding">Walking the path of opam, treading on solid ground</a>
</h2>
<p>We are aware that it can be quite a daunting task to get on board with the
OCaml distribution, be it because of its decentralised nature: a
plethora of different tools, a variety of sometimes clashing <em>modi operandi</em>
and practices, usually poorly documented edge use-cases, the many ways to
go about setting up a working environment, or many a different reason...</p>
<p>We have been thinking about making it easier for everyone, even the more
seasoned Cameleers, by releasing a set of blog posts progressively detailing
the depths to which <code>opam</code> can go.</p>
<p>Be sure to read these articles from the start if you are new to the beautiful
world of OCaml and, if you are already familiar with it, use them as trustworthy
documentation on speed-dial... You never know when you will have to set up an
opam installation while off the grid, do you?</p>
<p>Are you ready to dive in?</p>
<h2>
<a id="install" class="anchor"></a><a class="anchor-link" href="#install">First step: installing opam</a>
</h2>
<p>First, let's talk about installing opam.</p>
<blockquote>
<p>DISCLAIMER:
In this tutorial, we will only be addressing a fresh install of <code>opam</code> on
Linux and Mac. For more information about a Windows installation, stay tuned
with this blog!</p>
</blockquote>
<p>One would expect to have to interact with the package manager of one's
favourite distribution in order to install <code>opam</code>, and, to some extent, one
would be correct. However, we cannot guarantee that the version of opam you
have at your disposal through these means is indeed the one expected by this
tutorial, and every subsequent one for that matter.</p>
<p>You can check that <a href="https://opam.ocaml.org/doc/Distribution.html">here</a>; make
sure the version available to you is <code>2.1.5</code> or above.</p>
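<p>If you already have a distribution-provided <code>opam</code>, a quick way to check which
version it is goes as follows (the output shown here is merely illustrative):</p>
<pre><code class="language-shell-session">$ opam --version
2.1.5
</code></pre>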
<p>Thus, in order for us to guarantee that we are on the same version, we will use
the installation method found <a href="https://opam.ocaml.org/doc/Install.html">here</a>
and add an option to specify the version of opam we will be working with from
now on.</p>
<p>Note that if you <strong>don't</strong> add the <code>--version 2.1.5</code> option to the following
command line, the script will download and install the <strong>latest</strong> opam release.
The <code>opam</code> CLI is designed to remain consistent between versions so, unless you
have a very old version, or you are reading this article in the very distant
future, you should not run into problems by not using the <strong>exact</strong> same version as
we do. For the sake of consistency though, I will use this specific version.</p>
<pre><code class="language-shell-session">$ bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.1.5"
</code></pre>
<p>This script will download the necessary binaries for a proper installation of
<code>opam</code>. Once done, you can move on to the nitty gritty of having a working
<code>opam</code> environment with <code>opam init</code>.</p>
<h2>
<a id="opaminit" class="anchor"></a><a class="anchor-link" href="#opaminit">Second step: initialisation</a>
</h2>
<p>The first command to launch, after the initial <code>opam</code> binaries have been
downloaded and <code>opam</code> has been installed on your system, is <code>opam init</code>.</p>
<p>This is when you step into the OCaml distribution for the first time.</p>
<p><code>opam init</code> does several crucial things for you when you launch it, and the
rest of this article will detail what exactly these crucial things are and what
they mean:</p>
<ul>
<li>it checks some required and recommended tools;
</li>
<li>it syncs with the official OCaml <strong>opam-repository</strong>, which you can find
<a href="https://github.com/ocaml/opam-repository">here</a>;
</li>
<li>it sets up the <strong>opam environment</strong> in your <code>*rc</code> files;
</li>
<li>it creates a <strong>switch</strong> and installs an <strong>ocaml-compiler</strong> for you;
</li>
</ul>
<p>Let's take a step-by-step look at the output of that command:</p>
<pre><code class="language-shell-session">$ opam init
No configuration file found, using built-in defaults.
Checking for available remotes: rsync and local, git, mercurial, darcs.
Perfect!
<><> Fetching repository information ><><><><><><><><><><><><><><><><>
[default] Initialised
<><> Required setup - please read <><><><><><><><><><><><><><><><><><>
In normal operation, opam only alters files within ~/.opam.
However, to best integrate with your system, some environment
variables should be set. If you allow it to, this initialisation
step will update your bash configuration by adding the following
line to ~/.profile:
test -r ~/.opam/opam-init/init.sh && . ~/.opam/opam-init/init.sh > /dev/null 2> /dev/null || true
Otherwise, every time you want to access your opam installation,
you will need to run:
eval $(opam env)
You can always re-run this setup with 'opam init' later.
Do you want opam to modify ~/.profile? [N/y/f]
(default is 'no', use 'f' to choose a different file) y
User configuration:
Updating ~/.profile.
[NOTE] Make sure that ~/.profile is well sourced in your ~/.bashrc.
<><> Creating initial switch 'default' (invariant ["ocaml" {>= "4.05.0"}] - initially with ocaml-base-compiler)
<><> Installing new switch packages <><><><><><><><><><><><><><><><><>
Switch invariant: ["ocaml" {>= "4.05.0"}]
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
∗ installed base-bigarray.base
∗ installed base-threads.base
∗ installed base-unix.base
∗ installed ocaml-options-vanilla.1
⬇ retrieved ocaml-base-compiler.5.1.0 (https://opam.ocaml.org/cache)
∗ installed ocaml-base-compiler.5.1.0
∗ installed ocaml-config.3
∗ installed ocaml.5.1.0
∗ installed base-domains.base
∗ installed base-nnp.base
Done.
</code></pre>
<p>The main result of an <code>opam init</code> call is to set up what is called your <code>opam root</code>. It does so by creating a <code>~/.opam</code> directory to operate inside of.
By default, <code>opam</code> modifies and writes to this location <strong>only</strong>.</p>
<hr />
<p>First, <code>opam</code> checks that there is at least one required tool for syncing to
the <code>opam-repository</code>. Then it checks which backends are available on your
system. Here, all of them are: <code>rsync</code>, <code>git</code>, <code>mercurial</code> and <code>darcs</code>. They will
be used to sync repositories and packages.</p>
<pre><code class="language-shell-session">$ opam init
No configuration file found, using built-in defaults.
Checking for available remotes: rsync and local, git, mercurial, darcs.
Perfect!
</code></pre>
<p>Then, <code>opam</code> fetches the default opam repository: <code>opam.ocaml.org</code>.</p>
<pre><code class="language-shell-session"><><> Fetching repository information ><><><><><><><><><><><><><><><><>
[default] Initialised
</code></pre>
<hr />
<p>Secondly, <code>opam</code> requires your input in order to configure your shell for the
smoothest possible experience. For more details about the opam environment,
refer to the next section.</p>
<blockquote>
<p>Something interesting to remember for later: in the excerpt below, we
grant opam permission to edit the <code>~/.profile</code> file. This is part of
the quality-of-life features for everyday use of an <code>opam</code> environment, and we
will detail why below.</p>
</blockquote>
<pre><code class="language-shell-session"><><> Required setup - please read <><><><><><><><><><><><><><><><><><>
In normal operation, opam only alters files within ~/.opam.
However, to best integrate with your system, some environment
variables should be set. If you allow it to, this initialisation
step will update your bash configuration by adding the following
line to ~/.profile:
test -r ~/.opam/opam-init/init.sh && . ~/.opam/opam-init/init.sh > /dev/null 2> /dev/null || true
Otherwise, every time you want to access your opam installation,
you will need to run:
eval $(opam env)
You can always re-run this setup with 'opam init' later.
Do you want opam to modify ~/.profile? [N/y/f]
(default is 'no', use 'f' to choose a different file) y
User configuration:
Updating ~/.profile.
[NOTE] Make sure that ~/.profile is well sourced in your ~/.bashrc.
</code></pre>
<hr />
<p>The next action is the installation of your very first <code>switch</code> alongside a
version of the OCaml compiler, by default a compiler >= <code>4.05.0</code> to be exact.</p>
<p>For more information about what a <code>switch</code> is, be sure to read <a href="#switch">the rest of the
article</a>.</p>
<pre><code class="language-shell-session"><><> Creating initial switch 'default' (invariant ["ocaml" {>= "4.05.0"}] - initially with ocaml-base-compiler)
<><> Installing new switch packages <><><><><><><><><><><><><><><><><>
Switch invariant: ["ocaml" {>= "4.05.0"}]
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
∗ installed base-bigarray.base
∗ installed base-threads.base
∗ installed base-unix.base
∗ installed ocaml-options-vanilla.1
⬇ retrieved ocaml-base-compiler.5.1.0 (https://opam.ocaml.org/cache)
∗ installed ocaml-base-compiler.5.1.0
∗ installed ocaml-config.3
∗ installed ocaml.5.1.0
∗ installed base-domains.base
∗ installed base-nnp.base
Done.
</code></pre>
<p><strong>Great! So let's focus on the actions performed by the <code>opam init</code> call!</strong></p>
<h2>
<a id="opamenv" class="anchor"></a><a class="anchor-link" href="#opamenv">Acclimating to the environment</a>
</h2>
<p>Well, as said previously, the first action was to set up an <code>opam root</code> in your
<code>$HOME</code> directory (i.e., <code>~/.opam</code>). This is where <code>opam</code> will operate; <code>opam</code>
will never modify other locations in your filesystem without notifying you
first.</p>
<p>An <code>opam</code> root is laid out to resemble a Linux-like file hierarchy: you will find
inside it directories such as <code>/usr</code>, <code>/etc</code>, <code>/bin</code> and so on. By default, this is
where <code>opam</code> will store everything related to your system-wide
installation: config files, packages and their configurations, and also
binaries.</p>
<p>This leads us to the need for an <code>eval $(opam env)</code> call.</p>
<p>Indeed, in order to make your binaries and such accessible as system-wide
tools, you need to update all the relevant environment variables (<code>PATH</code>,
<code>MANPATH</code>, etc.) with all the locations for all of your everyday OCaml tools.</p>
<p>To see what variables are exported when evaluating the <code>opam env</code> command, you
can check the following codeblock:</p>
<pre><code class="language-shell-session">$ opam env
OPAM_SWITCH_PREFIX='~/.opam/default'; export OPAM_SWITCH_PREFIX;
CAML_LD_LIBRARY_PATH='~/.opam/default/lib/stublibs:~/.opam/default/lib/ocaml/stublibs:~/.opam/default/lib/ocaml'; export CAML_LD_LIBRARY_PATH;
OCAML_TOPLEVEL_PATH='~/.opam/default/lib/toplevel'; export OCAML_TOPLEVEL_PATH;
MANPATH=':~/.opam/default/man'; export MANPATH;
PATH='~/.opam/default/bin:$PATH'; export PATH;
</code></pre>
<p>Remember when we granted <code>opam init</code> permission to edit the <code>~/.profile</code>
file, earlier in this tutorial? That comes in handy now: it keeps us from
having to call <code>eval $(opam env)</code> more often than necessary.</p>
<p>Indeed, you would otherwise have to call it every time you launch a new shell,
among other things. What it does instead is add a hook at prompt level that
keeps the <code>opam</code> environment synced, updating it every time you press
<code>Enter</code>. Very handy indeed.</p>
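<p>If you skipped the <code>~/.profile</code> update, or find yourself in a shell where the
hook is not active, you can always sync things by hand. A quick sketch (the path
shown is illustrative and depends on your setup):</p>
<pre><code class="language-shell-session">$ eval $(opam env)
$ which ocaml
/home/ocp/.opam/default/bin/ocaml
</code></pre>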
<h2>
<a id="switch" class="anchor"></a><a class="anchor-link" href="#switch">Switches, tailoring your workspace to your vision</a>
</h2>
<p>The second task accomplished by <code>opam init</code> was installing the first <code>switch</code>
inside your fresh installation.</p>
<p>A <code>switch</code> is one of opam's core operational concepts. Its definition can vary
depending on your exact use-case, but in the case of OCaml, a <code>switch</code> is a
<strong>named pair</strong>:</p>
<ul>
<li>an arbitrary version of the OCaml compiler
</li>
<li>a list of packages available for that specific version of the compiler.
</li>
</ul>
<p>In our example, we see that the only packages installed in the process were the
dependencies for the OCaml compiler version <code>5.1.0</code> inside the <code>switch</code> named
<code>default</code>.</p>
<pre><code class="language-shell-session"><><> Creating initial switch 'default' (invariant ["ocaml" {>= "4.05.0"}] - initially with ocaml-base-compiler)
<><> Installing new switch packages <><><><><><><><><><><><><><><><><>
Switch invariant: ["ocaml" {>= "4.05.0"}]
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
∗ installed base-bigarray.base
∗ installed base-threads.base
∗ installed base-unix.base
∗ installed ocaml-options-vanilla.1
⬇ retrieved ocaml-base-compiler.5.1.0 (https://opam.ocaml.org/cache)
∗ installed ocaml-base-compiler.5.1.0
∗ installed ocaml-config.3
∗ installed ocaml.5.1.0
∗ installed base-domains.base
∗ installed base-nnp.base
Done.
</code></pre>
<p>You can create an arbitrary amount of parallel <code>switches</code> in opam. This allows
users to manage parallel, independent OCaml environments for their
developments.</p>
<p>There are two types of <code>switches</code>:</p>
<ul>
<li><code>global switches</code> have their packages, binaries and tools available
anywhere on your computer. They are useful when you consider a given <code>switch</code>
to be your default and most adequate environment for your everyday use of
<code>opam</code> and OCaml.
</li>
<li><code>local switches</code> on the other hand are only available in a given directory.
Their packages and binaries are local to that <strong>specific</strong> directory. This
allows users to make specific projects have their own self-contained
working environments. The local switch is automatically selected by <code>opam</code> as
the current one when you are located inside the appropriate directory. More
details on local switches below.
</li>
</ul>
<p>The default behaviour for <code>opam</code> when creating a <code>switch</code> at init-time is to
make it global and name it <code>default</code>.</p>
<pre><code class="language-shell-session">$ opam switch show
default
$ opam switch
# switch compiler description
→ default ocaml.5.1.0 default
</code></pre>
<p>Now that you have a general understanding of what exactly a <code>switch</code> is and how
it is used, let's get into how you can go about manually creating your first
<code>switch</code>.</p>
<h3>
<a id="createaswitch" class="anchor"></a><a class="anchor-link" href="#createaswitch">Creating a global switch</a>
</h3>
<blockquote>
<p>NB:
Remember that <code>opam</code>'s command-line interface is beginner-friendly. You can,
at any point of your exploration, use the <code>--help</code> option to have every
command and subcommand explained. You may also check out the <a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-opam.pdf">opam
cheat-sheet</a> that was released a while ago and might still hold some
precious insights into opam's CLI.</p>
</blockquote>
<p>So how does one create a <code>switch</code>? The short answer is bafflingly
straightforward:</p>
<pre><code class="language-shell-session"># Installs a switch named "my-switch" based OCaml compiler version > 4.05.0
# Here 4.05 is the default lower compiler version opam selects when unspecified
$ opam switch create my-switch
</code></pre>
<p>Easy, right? Now let's imagine that you would like to specify a <strong>later</strong> version
of the OCaml compiler. The first thing you would want to know is which versions
are available for you to specify, and you can use <code>opam list</code> for that.</p>
<p>Other commands can be used to the same effect but we prefer introducing you to
this specific one as it may also be used for any other package available via
<code>opam</code>.</p>
<p>So, as with any package other than <code>ocaml</code> itself, <code>opam list</code> will give you all
available versions of that package for your currently active <code>switch</code>. Since we
don't yet have an OCaml compiler installed, it will list all of them so that we
may pick and choose our favourite for the <code>switch</code> we are making.</p>
<pre><code class="language-shell-session">$ opam list ocaml
# Packages matching: name-match(ocaml) & (installed | available)
# Package # Installed # Synopsis
ocaml.3.07 -- The OCaml compiler (virtual package)
ocaml.3.07+1 -- The OCaml compiler (virtual package)
ocaml.3.07+2 -- The OCaml compiler (virtual package)
ocaml.3.08.0 -- The OCaml compiler (virtual package)
(...)
ocaml.4.13.1 -- The OCaml compiler (virtual package)
ocaml.4.13.2 -- The OCaml compiler (virtual package)
(...)
ocaml.5.2.0 -- The OCaml compiler (virtual package)
</code></pre>
<p>Let's use it for a switch:</p>
<pre><code class="language-shell-session"># Installs a switch named "my-switch" based OCaml compiler version = 4.13.1
$ opam switch create my-switch ocaml.4.13.1
</code></pre>
<p>That's it: for the first time, you have manually created your own <code>global switch</code>, tailored to your specific needs. Congratulations!</p>
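<p>To actually start using this new switch, select it as the active one and refresh
your environment. A minimal sketch, with illustrative output:</p>
<pre><code class="language-shell-session">$ opam switch set my-switch
$ eval $(opam env)
$ ocaml -version
The OCaml toplevel, version 4.13.1
</code></pre>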
<blockquote>
<p>NB:
Creating a switch can be a fairly time-consuming task depending on whether or
not the compiler version you have queried from <code>opam</code> is already installed on
your machine, typically in a previously created <code>switch</code>.
Every time you ask <code>opam</code> to install a version of the compiler, it will first
scour your installation for a locally available version of that compiler to
save you the time necessary for downloading, compiling and installing a brand
new one.</p>
</blockquote>
<p>Now, onto <code>local switches</code>.</p>
<h3>
<a id="switchlocal" class="anchor"></a><a class="anchor-link" href="#switchlocal">Creating a local switch</a>
</h3>
<p>As said previously, the use of a <code>local switch</code> is to constrain a specific
OCaml environment to a specific location on your workstation.</p>
<p>Let's imagine you are about to start a new development called <code>my-project</code>.</p>
<p>While preparing all necessary pre-requisites for it, you notice something
problematic: your global <code>default</code> switch is drastically incompatible with the
dependencies of your project.
In this imaginary situation, you have a <code>default</code> global switch that is useful
for most of your other tasks but now have only one project that differs from
your usual usage of OCaml.</p>
<p>To remedy this situation, you could create another global switch
for your upcoming work on <code>my-project</code> and proceed to install all
relevant packages, remaking a full <code>switch</code> from scratch for that specific
project. However, this would require you to always keep track of which switch is
currently active, while possibly having to oscillate regularly
between your global <code>default</code> switch and your alternative global <code>my-project</code>
switch, which you could understandably find suboptimal and tedious to
incorporate into your workflow in the long run.</p>
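<p>For reference, juggling two global switches by hand would look something like the
following sketch (the <code>my-project</code> global switch here is the hypothetical one from
the scenario above; remember to refresh the environment after each change):</p>
<pre><code class="language-shell-session"># Make the hypothetical global "my-project" switch the active one
$ opam switch set my-project
$ eval $(opam env)
# ...and back to the default switch
$ opam switch set default
$ eval $(opam env)
</code></pre>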
<p>That's when <code>local switches</code> come in handy, because they allow you to leave the
rest of your OCaml dev environment unaffected by whatever out-of-bounds or
specific workload you're undertaking. Additionally, the fact that <code>opam</code>
automatically selects your <code>local switch</code> as the current active one as soon as
you step inside the relevant directory makes the developer's context switch
seamless.</p>
<p>Let's examine how you can create such a <code>switch</code>:</p>
<pre><code class="language-shell-session"># Hop inside the directory of your project
$ cd my-project
# We consider your project already has an opam file describing only
# its main dependency: ocaml.4.14.1
$ opam switch create .
<><> Installing new switch packages <><><><><><><><><><><><><><><><><>
Switch invariant: ["ocaml" {>= "4.05.0"}]
<><> Processing actions <><><><><><><><><><><><><><><><><><><><><><><>
∗ installed base-bigarray.base
∗ installed base-threads.base
∗ installed base-unix.base
∗ installed ocaml-system.4.14.1
∗ installed ocaml-config.2
∗ installed ocaml.4.14.1
Done.
$ opam switch
# switch compiler description
→ /home/ocp/my-project ocaml.4.14.1 /home/ocp/my-project
default ocaml.5.1.0 default
my-switch ocaml.4.13.1 my-switch
[NOTE] Current switch has been selected based on the current directory.
The current global system switch is default.
</code></pre>
<p>There it is: you can now hop into your local switch <code>/home/ocp/my-project</code>
whenever you need to deviate from your global environment.</p>
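<p>As a small sketch of that automatic selection at work (output illustrative,
reusing the paths from the example above):</p>
<pre><code class="language-shell-session">$ cd ~/my-project
$ opam switch show
/home/ocp/my-project
$ cd ~
$ opam switch show
default
</code></pre>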
<h2>
<a id="opamrepo" class="anchor"></a><a class="anchor-link" href="#opamrepo">The official opam-repository, the safe for all your packages</a>
</h2>
<p>Among all the things that <code>opam init</code> did when it was executed, there is still
one detail we have yet to explain, and that's the first action of the process:
retrieving package specifications from the official OCaml <code>opam-repository</code>.</p>
<p>Explaining what exactly an <code>opam-repository</code> is requires a slightly deeper
understanding of how <code>opam</code> works than this article assumes, so you will have
to wait for us to go deeper into that subject in another blog post, when the
time is ripe.</p>
<p>What we <strong>will</strong> do now though is explain what the official OCaml
<code>opam-repository</code> is and how it relates to our use of <code>opam</code> in this blog post.</p>
<p><a href="https://github.com/ocaml/opam-repository">The Official OCaml opam-repository</a>
is an open-source project where all released software of the OCaml
distributions are <strong>referenced</strong>. It holds different compilers, basic tools,
thousands of libraries, approximatively 4500 packages in total as of today and
is configured to be the default repository for <code>opam</code> to sync to. You may add
your own repositories for your own use of <code>opam</code>, but again, that's a subject
for another time.</p>
<p>In case the repository itself is not what you are looking for, know that all
packages available throughout the entire OCaml distribution may be browsed
directly on <a href="https://ocaml.org/packages">ocaml.org</a>.</p>
<p>The repository is essentially a collection of <code>opam packages</code> described in the <code>opam file</code>
format. Check out <a href="https://opam.ocaml.org/doc/Manual.html#opam">the manual</a> for
more information about the <code>opam file</code> format.</p>
<p>In short, an <code>opam package</code> file holds all the information
necessary for <code>opam</code> to operate: it lists the
package's direct dependencies, where to find its source code, the names and
emails of maintainers and authors, checksums for each release archive, and the
list goes on.</p>
<p>Here's a quick example for you to have an idea of what it looks like:</p>
<pre><code class="language-shell-session">opam-version: "2.0"
synopsis: "OCaml bindings to Zulip API"
maintainer: ["Dario Pinto <dario.pinto@ocamlpro.com>"]
authors: ["Mohamed Hernouf <mohamed.hernouf@ocamlpro.com>"]
license: "LGPL-2.1-only WITH OCaml-LGPL-linking-exception"
homepage: "https://github.com/OCamlPro/ozulip"
doc: "https://ocamlpro.github.io/ozulip"
bug-reports: "https://github.com/OCamlPro/ozulip/issues"
dev-repo: "git+https://github.com/OCamlPro/ozulip.git"
tags: ["zulip" "bindings" "api"]
depends: [
"ocaml" {>= "4.10"}
"dune" {>= "2.0"}
"ez_api" {>= "2.0.0"}
"re"
"base64"
"json-data-encoding" {>= "1.0.0"}
"logs"
"lwt" {>= "5.4.0"}
"ez_file" {>= "0.3.0"}
"cohttp-lwt-unix"
"yojson"
"logs"
]
build: [ "dune" "build" "-p" name "-j" jobs "@install" ]
url {
src: "https://github.com/OCamlPro/ozulip/archive/refs/tags/0.1.tar.gz"
checksum: [
"md5=4173fefee440773dd0f8d7db5a2e01e5"
"sha512=cb53870eb8d41f53cf6de636d060fe1eee6c39f7c812eacb803b33f9998242bfb12798d4922e7633aa3035cf2ab98018987b380fb3f380f80d7270e56359c5d8"
]
}
</code></pre>
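<p>You rarely need to browse these files on GitHub yourself: assuming the package is
referenced by one of your configured repositories, <code>opam show</code> prints the same
kind of metadata directly from the command line. For instance, with the package
above:</p>
<pre><code class="language-shell-session"># Display the metadata of a package as known by your opam installation
$ opam show ozulip
</code></pre>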
<p>Okay so now, how do we go about populating a <code>switch</code> with packages and really get started?</p>
<h2>
<a id="packages" class="anchor"></a><a class="anchor-link" href="#packages">Installing packages in your current switch</a>
</h2>
<p>It's elementary. This simple command will do the trick of <em>trying</em> to install a
package, <strong>and its dependencies</strong>, in your currently active <code>switch</code>.</p>
<pre><code class="language-shell-session">$ opam install my-package
</code></pre>
<p>I say <em>trying</em> because <code>opam</code> will notify you if the package version you are
querying, or its dependencies, are not compatible with the current
state of your <code>switch</code>. It will also offer you solutions to make the compatibility
constraints between packages satisfiable: it may suggest upgrading some
of your packages, or even removing them entirely.</p>
<p>The key thing about this process is that <code>opam</code> is designed to solve
compatibility constraints over the global graph of dependencies that OCaml
packages form. This design is what makes <code>opam</code> the average Cameleer's best
friend: it will highlight inconsistencies between dependencies, figure
out a way for your specific query to be satisfiable somehow, and save you <strong>a
lot</strong> of head-scratching; that is, if you are willing to accommodate a bit of
<em>getting used to it</em>.</p>
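<p>If you would rather preview the solver's plan before committing to it, recent
<code>opam</code> 2.x versions can simulate the action; check <code>opam install --help</code> for the
exact flags available in your version. A sketch, with a hypothetical package name:</p>
<pre><code class="language-shell-session"># Show what opam would do to satisfy the constraints, without changing anything
$ opam install my-package --dry-run
</code></pre>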
<p>The next command allows you to uninstall a package from your currently active
<code>switch</code> <strong>as well as</strong> the packages that depend on it:</p>
<pre><code class="language-shell-session">$ opam remove my-package
</code></pre>
<p>And the two following commands will <code>update</code> the state of the repositories <code>opam</code> is
synchronised with, and <code>upgrade</code> the installed packages, while <strong>always</strong> keeping
package compatibility in mind.</p>
<pre><code class="language-shell-session">$ opam update
$ opam upgrade
</code></pre>
<h2>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h2>
<p>There you have it: you should now be knowledgeable enough about <code>opam</code> to jump right
into discovering OCaml!</p>
<p>Today we learned everything elementary about <code>opam</code>.</p>
<p>From installation, to initialisation and explanations about the core concepts
of the <code>opam</code> environment, <code>switches</code>, packages and the Official OCaml
<code>opam-repository</code>.</p>
<p>Be sure to stay tuned with our blog, the journey into the rabbit hole has only
started and <code>opam</code> is a deep one indeed!</p>
<hr />
<p>Thank you for reading,</p>
<p>From 2011, with love,</p>
<p>The OCamlPro Team</p>
Maturing Learn-OCaml to version 1.0: Gateway to the OCaml Worldhttps://ocamlpro.com/blog/2023_12_13_learn_ocaml_gateway_to_the_ocaml_world2023-12-13T08:12:13Z2023-12-13T08:12:13Z
Dario Pinto
From the very start OCamlPro has been trying to help ease the learning of the OCaml language. OCaml has been used around the world to teach about a variety of Computer Science domains, from algorithmic to calculus, or functional programming and compilation. The language had been long taught in Acade...<p></p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/dalle_camel_on_the_road.png">
<img alt="Camels are known to be able to walk long distances. They have adapted to an inhospitable environment and help Humanity daily." src="/blog/assets/img/dalle_camel_on_the_road.png"/>
</a>
<div class="caption">
Camels are known to be able to walk long distances. They have adapted to an inhospitable environment and help Humanity daily.
</div>
</p>
</div>
</p>
<p>From the very start, OCamlPro has been trying to help ease the learning of the
OCaml language. OCaml has been used around the world to teach a variety
of Computer Science domains, from algorithmics to calculus, functional
programming, and compilation.</p>
<p>The language had long been taught in academia when initiatives arose to offer
simple web tools to write and compile OCaml code directly in the browser. We
launched the <a href="https://try.ocaml.pro/">TryOCaml</a> web editor for OCaml all the
way back in 2012. We were then appointed in 2015 by Roberto Di Cosmo, from the
French University Paris-Diderot, to create the <a href="https://www.fun-mooc.fr/fr/cours/introduction-to-functional-programming-in-ocaml/">OCaml FUN
MOOC</a>
platform, and helped write the exercises used as pedagogical resources for the
<code>Introduction to Functional Programming</code> course.</p>
<p>That is how the <a href="https://github.com/ocaml-sf/learn-ocaml">Learn-OCaml open source learning platform</a> was born, created
then maintained at OCamlPro until 2018. Its steering was then transferred to
the <a href="http://ocaml-sf.org/actions/">OCaml Software Foundation</a> in 2019 and the
project steadily grew into a fully fledged tool used by teachers and students
around the world to this day.</p>
<p><em>Kudos to all OCaml teachers around the world, and to the LearnOCaml team, shepherded by Louis Gesbert</em></p>
<h2>
<a id="loc" class="anchor"></a><a class="anchor-link" href="#loc">Learn-OCaml v1.0</a>
</h2>
<blockquote>
<p><strong>What is Learn-OCaml today?</strong></p>
<p><a href="https://github.com/ocaml-sf/learn-ocaml">Learn-OCaml</a> is a web platform for
orchestrating exercises for OCaml programming, with automated grading. The
interface features a code editor and client-side evaluation and grading; it
can be served statically, but if running the bundled server there are also
server-side saves, facilities for teachers to follow the progress of
students,
give assignments, get grades, etc.</p>
<p>We are thrilled to announce that the steady work that has been accomplished
over the years on <code>Learn-OCaml</code> is finally bearing fruit in the form of a
long-awaited, soon-to-be-released <code>v1.0</code>!</p>
</blockquote>
<hr />
<p>For all details relative to the upcoming <code>1.0</code> release, do refer to <a href="https://discuss.ocaml.org/t/learn-ocaml-1-0-approaching-call-for-testers/13621/1">Louis' post
on OCaml Discuss</a>.</p>
<p>For all historical intents and purposes, do refer to the <a href="https://files.ocamlpro.com/uploads/ocaml-2016-learn-ocaml.pdf">original 2016 OCaml
Workshop paper</a> on Learn-OCaml which kickstarted a long stream of updates and
improvements to the platform and its <a href="https://github.com/ocaml-sf/learn-ocaml-corpus">public corpus of exercises</a>.</p>
<p><strong>The maintenance and development work on the platform is now funded by the OCaml Software Foundation.</strong></p>
The latest release of Alt-Ergo version 2.5.1 is out, with improved SMT-LIB and bitvector support!https://ocamlpro.com/blog/2023_09_18_release_of_alt_ergo_2_5_12023-09-18T08:12:13Z2023-09-18T08:12:13Z
Pierre Villemot
We are happy to announce a new release of Alt‑Ergo (version 2.5.1). Alt-Ergo is a cutting-edge automated prover designed specifically for mathematical formulas, with a primary focus on advancing program verification.
This powerful tool is instrumental in the arsenal of static analysis solutions su...<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ae-251-is-out.png">
<img alt="Alt‑Ergo: An Automated SMT Solver for Program Verification" src="/blog/assets/img/ae-251-is-out.png"/>
</a>
<div class="caption">
Alt‑Ergo: An Automated SMT Solver for Program Verification
</div>
</p>
</div>
</p>
<p><strong>We are happy to announce a new release of Alt‑Ergo (version 2.5.1).</strong></p>
<blockquote>
<p>Alt-Ergo is a cutting-edge automated prover designed specifically for mathematical formulas, with a primary focus on advancing program verification.</p>
<p>This powerful tool is instrumental in the arsenal of static analysis solutions such as Trust-In-Soft Analyzer and Frama-C. It accompanies other major solvers like CVC5 and Z3, and is part of the solvers used behind Why3, a platform renowned for deductive program verification.</p>
<p><strong>Find out more about Alt‑Ergo and how to join the Alt-Ergo Users' Club <a href="https://alt-ergo.ocamlpro.com/#about">here</a>!</strong></p>
</blockquote>
<p>This release includes the following new features and improvements:</p>
<ul>
<li>support for bit-vectors in the SMT-LIB format;
</li>
<li>new SMT-LIB parser and typechecker;
</li>
<li>improved bit-vector reasoning;
</li>
<li>partial support for SMT-LIB commands <code>set-option</code> and <code>get-model</code>;
</li>
<li>simplified options to enable floating-point arithmetic theory;
</li>
<li>various bug fixes.
</li>
</ul>
<h3>Update for bug fixes</h3>
<p>Since writing this blog post, we have released Alt-Ergo version 2.5.2, which fixes an incorrect implementation of the <code>(distinct)</code> SMT-LIB operator when applied to more than two arguments, and a (rare) crash in model generation. We strongly advise users interested in SMT-LIB or model generation support to upgrade to version 2.5.2 on OPAM.</p>
<h2>Better SMT-LIB Support</h2>
<p>This release includes a better support of the
<a href="https://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2021-05-12.pdf">SMT-LIB standard v2.6</a>.
More precisely, the release contains:</p>
<ul>
<li>built-in primitives for the
<a href="https://smtlib.cs.uiowa.edu/theories-FixedSizeBitVectors.shtml">FixedSizeBitVectors</a> and
<a href="https://smtlib.cs.uiowa.edu/theories-Reals_Ints.shtml">Reals_Ints</a> theories,
and the <a href="https://smtlib.cs.uiowa.edu/logics-all.shtml#QF_BV">QF_BV</a> logic;
</li>
<li>new fully-featured parsers and type-checkers for SMT-LIB and native Alt-Ergo languages;
</li>
<li>specific and meaningful messages for syntax and typing errors.
</li>
</ul>
<p>These features are powered by the
<a href="https://github.com/Gbury/dolmen">Dolmen Library</a> through
a new frontend alongside the legacy one. Dolmen, developed by our own Guillaume Bury,
is also used by the SMT community to check the conformity of the
<a href="https://smtlib.cs.uiowa.edu/benchmarks.shtml">SMT-LIB benchmarks</a>.</p>
<p><strong>Important</strong>:
In this release, the legacy frontend is still the default.
If you want to enable the new Dolmen frontend, use the option
<code>--frontend dolmen</code>. We encourage you to try it and report any bugs on our
<a href="https://github.com/OCamlPro/alt-ergo/issues">issue tracker</a>.</p>
<p><strong>Note</strong>: We plan to deprecate the legacy frontend and make Dolmen the default frontend in version <code>2.6.0</code>, and to fully remove the legacy frontend in version <code>2.7.0</code>.</p>
<h3>Support For Bit-Vectors Primitives</h3>
<p>Alt-Ergo has had support for bit-vectors in its native language for a long time,
but bit-vectors were not supported by the old SMT-LIB parser, and hence not
available in the SMT-LIB format. This has changed with the new Dolmen front-end,
and support for bit-vectors in the SMT-LIB format is now available starting with
Alt-Ergo 2.5.1!</p>
<p>The SMT-LIB theories for bit-vectors, <code>BV</code> and <code>QF_BV</code>, have more primitives than
those previously available in Alt-Ergo. Alt-Ergo 2.5.1 supports all the
primitives of the <code>BV</code> and <code>QF_BV</code> theories when using the Dolmen frontend.
Alt-Ergo's reasoning capabilities on the new primitives are limited, and will
be gradually improved in future releases.</p>
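<p>As a minimal sketch (the file name <code>bv_example.smt2</code> is hypothetical), a small
<code>QF_BV</code> problem exercising the new primitives could look like this:</p>
<pre><code class="language-shell-session">(set-logic QF_BV)
(declare-const x (_ BitVec 8))
(assert (= (bvand x #x0F) #x0A))
(check-sat)
</code></pre>
<p>and would be run through the Dolmen frontend with:</p>
<pre><code class="language-shell-session">alt-ergo --frontend dolmen bv_example.smt2
</code></pre>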
<h3>Built-in Primitives For Mixed Integer And Real Problems</h3>
<p>In this release, we add support for the
primitives <code>to_real</code>, <code>to_int</code> and <code>is_int</code> of the SMT-LIB theory
<a href="https://smtlib.cs.uiowa.edu/theories-Reals_Ints.shtml">Reals_Ints</a>.
Note that this support is only available through the Dolmen frontend.</p>
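<p>A minimal sketch of a mixed integer/real problem using these primitives
(hypothetical file <code>mixed.smt2</code>, to be run through the Dolmen frontend as above):</p>
<pre><code class="language-shell-session">(set-logic ALL)
(declare-const x Int)
(declare-const y Real)
(assert (= y (to_real x)))
(assert (is_int y))
(check-sat)
</code></pre>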
<h3>Example</h3>
<p>For instance, the input file <code>input.smt2</code>:</p>
<pre><code class="language-shell-session">(set-logic ALL)
(declare-const x Int)
(declare-const y Real)
(declare-fun f (Int Int) Int)
(assert (= (f x y) 0))
(check-sat)
</code></pre>
<p>with the command:</p>
<pre><code class="language-shell-session">alt-ergo --frontend dolmen input.smt2
</code></pre>
<p>produces the limpid error message:</p>
<pre><code class="language-shell-session">File "input.smt2", line 5, character 11-18:
5 | (assert (= (f x y) 0))
^^^^^^^
Error The term: `y` has type `real` but was expected to be of type `int`
</code></pre>
<h2>Model Generation</h2>
<p>Generating models (also known as counterexamples) is highly appreciated by
users of SMT solvers. Indeed, most built-in theories in common SMT solvers
are incomplete. As a consequence, solvers can fail to discharge goals and,
without models, the SMT solver behaves as a black box, outputting laconic
answers: <code>sat</code>, <code>unsat</code> or <code>unknown</code>.</p>
<p>Providing best-effort counterexamples helps developers
understand why the solver failed to validate a goal. If the goal isn't valid,
the solver should, as much as it can, output a correct counterexample that helps
users fix their specifications. If the goal is actually valid, the
generated model is wrong, but it can still help the SMT solver's maintainers
understand why their solver didn't manage to discharge the goal.</p>
<p>Model generation for <code>LIA</code> theory and <code>enum</code> theory is available in Alt-Ergo.
The feature for other theories is either in testing phase or being implemented.
If you run into wrong models, please report them on our
<a href="https://github.com/OCamlPro/alt-ergo/issues">Github repository</a>.</p>
<h3>Usage</h3>
<p>The present release provides convenient ways of generating models.
Note that model invocation has changed since the post
<a href="https://ocamlpro.com/blog/2022_11_16_alt-ergo-models/">Alt-Ergo: the SMT solver with model generation</a>,
which described model generation on the <code>next</code> development branch.</p>
<p>Check out the <a href="https://ocamlpro.github.io/alt-ergo/Usage/index.html#generating-models">documentation</a> for more details.</p>
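<p>As a rough sketch of what a model-generation query can look like in SMT-LIB
(hypothetical file <code>lia.smt2</code>; refer to the documentation linked above for the
options actually supported by your version):</p>
<pre><code class="language-shell-session">(set-logic QF_LIA)
(set-option :produce-models true)
(declare-const x Int)
(assert (> x 5))
(check-sat)
(get-model)
</code></pre>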
<h2>Floating Point Support</h2>
<p>In version 2.5.1, the options to enable support for unbounded floating-point
arithmetic have been simplified. The options <code>--use-fpa</code> and
<code>--prelude fpa-theory-2019-10-08-19h00.ae</code> are gone: floating-point arithmetic
is now treated as a built-in theory and can be enabled with
<code>--enable-theories fpa</code>. We plan on enabling support for the FPA theory by default
in a future release.</p>
<h3>Usage</h3>
<p>To turn on the <code>fpa</code> theory, use the new option <code>--enable-theories fpa</code> as follows:</p>
<pre><code class="language-shell-session">alt-ergo --enable-theories fpa input.smt2
</code></pre>
<h2>About Alt-Ergo 2.5.0</h2>
<p>Version 2.5.0 should not be used, as it contains a soundness bug with the
new <code>bvnot</code> primitive that slipped through the cracks. The bug was found
immediately after the release, and version 2.5.1 was released with a fix.</p>
<h2>Acknowledgements</h2>
<p>We thank members of the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users' Club</a>: Thales, Trust-in-Soft, AdaCore, MERCE and the CEA.</p>
<p>We especially thank David Mentré and Denis Cousineau at Mitsubishi Electric R&D
Center Europe for funding the initial work on model generation.
Note that MERCE has been a Member of the Alt-Ergo Users' Club for three years.
This partnership allowed Alt-Ergo to evolve and we hope that more users will join
the Club on our journey to make Alt-Ergo a must-have tool.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/clubAE.png">
<img alt="The dedicated members of our Alt-Ergo Club!" src="/blog/assets/img/clubAE.png"/>
</a>
<div class="caption">
The dedicated members of our Alt-Ergo Club!
</div>
</p>
</div>
</p>
2022 at OCamlProhttps://ocamlpro.com/blog/2023_06_30_2022_at_ocamlpro2023-06-30T08:12:13Z2023-06-30T08:12:13Z
Dario Pinto
OCamlPro
For 12 years now, OCamlPro has been empowering a large range of customers, allowing them to harness state-of-the-art technologies and languages like OCaml and Rust. Our not-so-small-anymore company steadily grew into a team of highly-skilled and passionate engineers, experts in Computer Science, fro...<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ocp_beach_2023.png">
<img alt="Clear skies on OCamlPro's way of life." src="/blog/assets/img/ocp_beach_2023.png"/>
</a>
<div class="caption">
Clear skies on OCamlPro's way of life.
</div>
</p>
</div>
</p>
<p>For 12 years now, OCamlPro has been empowering a large range of customers,
allowing them to harness state-of-the-art technologies and languages like OCaml
and Rust. Our not-so-small-anymore company steadily grew into a team of
highly-skilled and passionate engineers, experts in Computer Science, from
Compilation and Software Analysis to Domain Specific Languages design and
Formal Methods.</p>
<p>In this article, as every year (see <a href="https://ocamlpro.com/blog/2022_01_31_2021_at_ocamlpro/">last
year's post</a>), albeit later than usual, we review some of the work we did
during 2022, in many different worlds, as the wide range of missions we
carried out shows.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<p><a href="#people">Newcomers at OCamlPro</a></p>
<p><a href="#apps">Modernizing Core Parts of Real Life Applications</a></p>
<ul>
<li><a href="#mlang">MLANG, keystone of the French citizens' Income Tax Calculation</a>
</li>
<li><a href="#cobol">Contributing to GnuCOBOL, the Free Open-Source COBOL Alternative</a>
</li>
</ul>
<p><a href="#rust">Rust Expertise and Developments</a></p>
<ul>
<li><a href="#ecore">Ecore, a heart of Rust for EMF</a>
</li>
<li><a href="#osource">Open-Source Rust Contributions</a>
<ul>
<li><a href="#lean4">Contributions to Lean4 Language</a>
</li>
<li><a href="#matla">Matla, TLA+ Projects Manager</a>
</li>
<li><a href="#agnos">Agnos, for Let's Encrypt Wildcard Certificates</a>
</li>
</ul>
</li>
</ul>
<p><a href="#wasm">The WebAssembly Garbage Collection Working-Group</a></p>
<p><a href="#formal-methods">Tooling for Formal Methods</a></p>
<ul>
<li><a href="#prover">The Alt-Ergo Theorem Prover</a>
<ul>
<li><a href="#club">The Alt-Ergo Users' Club</a>
</li>
<li><a href="#alt-ergo">Developing Alt-Ergo</a>
</li>
</ul>
</li>
<li><a href="#dolmen">Dolmen Library for Automated Deduction Languages</a>
</li>
</ul>
<p><a href="#ocaml">Contributions to OCaml</a></p>
<ul>
<li><a href="#opam">About opam, the OCaml Package Manager</a>
</li>
<li><a href="#flambda">The Flambda2 Optimizing Compiler</a>
</li>
</ul>
<p><a href="#meetups">Organizing OCaml Meetups</a></p>
<ul>
<li><a href="#oups">OCaml Users in PariS (OUPS)</a>
</li>
<li><a href="#octo">OCaml Meet-Up in Toulouse</a>
</li>
</ul>
<p><a href="#confs">Participation to External Events</a></p>
<ul>
<li><a href="#ocamlworkshop">The OCaml Workshop 2022 - ICFP Ljubljana</a>
</li>
<li><a href="#jfla2022">Journées Francophones Langages Applicatif 2022</a>
</li>
</ul>
<p></div></p>
<h2>
<a id="people" class="anchor"></a><a class="anchor-link" href="#people">Newcomers at OCamlPro</a>
</h2>
<p>OCamlPro is not just an R&D software company: we like to think of it
more as a team of people who enjoy working together. So, we are proud
to introduce the incredible human beings who joined us in 2022:</p>
<ul>
<li>
<p><em>Pierre Villemot</em> joined us in June. After three years of research at the
Weizmann Institute on transcendental measures in Arithmetical Geometry, he was
recruited and became the main maintainer of the Alt-Ergo Theorem Prover.</p>
</li>
<li>
<p><em>Milàn Martos</em> joined us in July. He studied Chemistry and Computer Science at
ENS, and he holds an MBA. He joined the Team as a Presales Engineer and as a
Junior OCaml Web Developer.</p>
</li>
<li>
<p><em>Nathanaëlle Courant</em> joined us in September. She holds a Master's degree
from École Normale Supérieure in Paris, and is finishing her Ph.D. on
efficient and verified reduction and convertibility tests for theorem
provers. She joined OCamlPro in 2022 and works on the OCaml optimizer, in the
Flambda team.</p>
</li>
<li>
<p><em>Arthur Carcano</em> also joined us in September. Arthur is a Rust developer
interested in performance optimization, software design, and crafting
powerful and user-friendly tools. After completing his M.Sc. in Computer
Science at ENS Ulm, he obtained a Ph.D. in Mathematics and Computer Science
from Université de Paris.</p>
</li>
<li>
<p><em>Emilien Lemaire</em> joined us in December 2022. After an internship on
typechecking COBOL statements, he will be working with our COBOL
team on creating a studio of modern tools for COBOL.</p>
</li>
</ul>
<h2>
<a id="apps" class="anchor"></a><a class="anchor-link" href="#apps">Modernizing Core Parts of Real Life Applications</a>
</h2>
<p>We love to harness our IT expertise to give a competitive advantage to
our clients by modernizing core chunks of key infrastructures. For
example, we are working with the French Public Finances General
Directorate on two of their modernization projects, to reimplement the
language used for the computation of the Income Tax (<a href="#mlang">MLang</a>)
and to provide support on the GnuCOBOL compiler used by the MedocDB
application (<a href="#cobol">COBOL</a>).</p>
<h3>
<a id="mlang" class="anchor"></a><a class="anchor-link" href="#mlang">MLANG, keystone of the French citizens' Income Tax Calculation</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/dgfip_2023_at_ocp.jpg">
<img alt="The M language, designed in the 80s to compute the French Income Tax, is still being rewritten in OCaml!" src="/blog/assets/img/dgfip_2023_at_ocp.jpg"/>
</a>
<div class="caption">
The M language, designed in the 80s to compute the French Income Tax, is still being rewritten in OCaml!
</div>
</p>
</div>
</p>
<p>In 2022, our work on MLANG passed a significant milestone: it can no
longer be considered a prototype! Code generation is now behaviourally compliant
with the upstream compiler. David focused on rewriting the C architecture,
which has been a great help in iterating through each version of this new
implementation of MLANG.</p>
<p>As far as testing goes, we were allowed to compare the results of our
implementation against those of the upstream calculator, including on real-life
inputs. We are talking about calculations at an immense scale, which makes the
project highly performance-sensitive. We managed to reach equivalent
performance, which mattered a great deal to our contractors, who have since
voiced their satisfaction. It is always great for us to feel appreciated for
our work.</p>
<p>The next step is to reach production quality by the end of 2023, so
stay tuned if you are interested in this great project.</p>
<blockquote>
<p>Wondering what MLANG is? Be sure to read <a href="https://ocamlpro.com/blog/2022_01_31_2021_at_ocamlpro/#mlang">last year's
post</a> on the matter.</p>
</blockquote>
<h3>
<a id="cobol" class="anchor"></a><a class="anchor-link" href="#cobol">Contributing to GnuCOBOL, the Free Open-Source COBOL Alternative</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/COBOL_DEFENSE_2.jpg">
<img alt="Cobol is ran in gargantuan infrastructures of many an insurance companies and banks across the globe." src="/blog/assets/img/COBOL_DEFENSE_2.jpg"/>
</a>
<div class="caption">
COBOL runs in the gargantuan infrastructures of many insurance companies and banks across the globe.
</div>
</p>
</div>
</p>
<p>In 2022, we started contributing to <a href="https://github.com/ocamlpro/gnucobol">the GnuCOBOL
project</a>: the GnuCOBOL compiler
is, today, the only free, open-source, industrial-grade alternative to
proprietary compilers by IBM and Micro-Focus. A cornerstone feature of
GnuCOBOL is its ability to understand proprietary extensions to COBOL
(dialects), to decrease the migration work of developers.</p>
<blockquote>
<p><a href="https://ocamlpro.com/blog/2022_01_31_2021_at_ocamlpro/#cobol">Last year's <code>at OCamlPro</code></a>
presented our gradual introduction to the
<a href="https://wikipedia.org/wiki/COBOL">COBOL</a> Universe as one of our
latest technical endeavours. In the beginning, our main objective was
to wrap our heads around the state of the environment for COBOL
developers.</p>
</blockquote>
<p>Our main contribution for now is to add support for the GCOS7 dialect,
to ease migration from obsolete GCOS Bull mainframes to a cluster of
PCs running GnuCOBOL for our first COBOL customer, the French
<a href="https://fr.wikipedia.org/wiki/Direction_g%C3%A9n%C3%A9rale_des_Finances_publiques">DGFIP</a>
(<em>Public Finances General Directorate</em>). We also contributed a few fixes and
small useful general features. Our contributions are gradually upstreamed in
the official version of GnuCOBOL.</p>
<p>The other part of our COBOL strategy is to progressively develop our
<a href="https://get-superbol.com/">SuperBOL Studio</a>, a set of modern tools
for COBOL, and especially GnuCOBOL, based on an OCaml parser for COBOL
that we have been improving during the year to support the full COBOL
standard. More on this next year, hopefully!</p>
<h2>
<a id="rust" class="anchor"></a><a class="anchor-link" href="#rust">Rust Expertise and Developments</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/florian_gilcher_ferrous_praises_ocp.png">
<img alt="Kind words sent our way by Florian Gilcher (skade), managing director at Ferrous Systems!" src="/blog/assets/img/florian_gilcher_ferrous_praises_ocp.png"/>
</a>
<div class="caption">
Kind words sent our way by Florian Gilcher (skade), managing director at Ferrous Systems!
</div>
</p>
</div>
</p>
<p>OCamlPro's culture is one of human values and an appeal for everything scientific.</p>
<p>Programming languages of all natures have caught our attention at some point in
time. As a consequence of years of expertise in all kinds of languages, we have
grown fond of strongly-typed, memory-safe ones. Eventually gravitating towards
Rust, we have since invested significantly in adding this state-of-the-art
language to our toolsets, one of which being the <a href="https://training.ocamlpro.com/">trainings we deliver to
industrial actors</a> of various backgrounds to
help them get to grips with such technological marvels.</p>
<p>Our trainers are qualified engineers, some of whom have more than ten years of
experience in the industry, in R&D, Formal Methods and embedded systems alike,
including seven years solely in Rust.</p>
<p>Building on our collective experience, 2022 was indeed the stage for many
contributions and missions, some of which we will share with you right now.</p>
<h3>
<a id="ecore" class="anchor"></a><a class="anchor-link" href="#ecore">Ecore, a heart of Rust for EMF</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/EMF_ARCHITECTURE.png">
<img alt="Ecore is the code generator at the heart of the EMF Architecture." src="/blog/assets/img/EMF_ARCHITECTURE.png"/>
</a>
<div class="caption">
Ecore is the code generator at the heart of the EMF Architecture.
</div>
</p>
</div>
</p>
<p>In 2022, we seized the opportunity to work at the threshold between Java
and Rust for our clients and academic partners at the CEA (Commissariat à
l'énergie atomique et aux énergies alternatives). The deliverable was a code
generator, written in Rust, that encodes a Java-style class hierarchy in Rust.</p>
<p>Ecore is the core metamodel at the heart of the <a href="https://en.wikipedia.org/wiki/Eclipse_Modeling_Framework">Eclipse Modeling Framework
(EMF)</a>, which is used
internally at the CEA. Ecore is a superset of
<a href="https://en.wikipedia.org/wiki/Unified_Modeling_Language">UML</a> and allows for
the engineers of the CEA to express a Java class hierarchy through a graphical
interface. In practice, this allows for the generation of basic Java models for
the engineers to then build upon.</p>
<p>Our mission consisted in writing, in Rust, a new model generator for them to
use in their workflows and to make it capable of generating Rust code instead
of Java.</p>
<p>The cost of harnessing the objective qualities of a new implementation
in Rust was to have us tackle the scientific challenges arising from the
inherent structural differences between the two languages. Our goal was to find
a way to express, in Rust, the semantics of Java class hierarchies, thereby
bridging the worlds of Rust and Java along the way (a simplified flavour of
which is sketched below).</p>
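<p>To give a flavour of what such an encoding can look like, here is a deliberately
simplified sketch. The type names are borrowed from Ecore terminology for
illustration only, and this is not the code produced by the actual generator: a
Java-style base class becomes a trait, inherited behaviour becomes default
methods, and subtype polymorphism is recovered through trait objects.</p>
<pre><code class="language-rust">// Illustrative sketch only: one possible way to mirror a small Java-style
// class hierarchy (a base class and a subclass overriding a method) in Rust.

// The "base class" interface becomes a trait.
trait NamedElement {
    fn name(&self) -> &str;
    // A default method plays the role of an inherited Java method.
    fn describe(&self) -> String {
        format!("element '{}'", self.name())
    }
}

// Each concrete "class" becomes a struct implementing the trait.
struct Classifier {
    name: String,
    is_abstract: bool,
}

impl NamedElement for Classifier {
    fn name(&self) -> &str {
        &self.name
    }
    // Overriding corresponds to providing a specific implementation.
    fn describe(&self) -> String {
        format!("classifier '{}' (abstract: {})", self.name, self.is_abstract)
    }
}

fn main() {
    // Subtype polymorphism is recovered through trait objects.
    let elements: Vec<Box<dyn NamedElement>> = vec![Box::new(Classifier {
        name: "Vehicle".to_string(),
        is_abstract: true,
    })];
    for e in &elements {
        println!("{}", e.describe());
    }
}
</code></pre>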
<p>Eventually, our partners were convinced the challenges were worth
the improved speed at which models were generated. Furthermore, the now
embedded-friendly platform, the runtime safety, and even Rust's
broader WebAssembly-ready toolchain have cleared a new path for
future iterations of their internal projects.</p>
<h3>
<a id="osource" class="anchor"></a><a class="anchor-link" href="#osource">Open-Source Rust Contributions</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ferris_stained_glass_2023.png">
<img alt="Ferris the Crab is the mascot of the Rust Language. No wonder why we converged as well!" src="/blog/assets/img/ferris_stained_glass_2023.png"/>
</a>
<div class="caption">
Ferris the Crab is the mascot of the Rust Language. No wonder why we converged as well!
</div>
</p>
</div>
</p>
<p>As we continue scouring the market for more and more Rust projects, and whenever
the opportunity shows up, we still actively contribute to the open-source
community. Here is some of this year's open-source work:</p>
<h4>
<a id="lean4" class="anchor"></a><a class="anchor-link" href="#lean4">Lean4</a>
</h4>
<p>Here's a project suited for all who, like us, are functional programming and
formal methods enthusiasts:
<a href="https://leanprover.github.io/about/">Lean</a>:</p>
<blockquote>
<p>Lean is a functional programming language that makes it easy to write correct
and maintainable code. You can also use Lean as an interactive theorem prover.
Lean programming primarily involves defining types and functions. This allows
your focus to remain on the problem domain and manipulating its data, rather
than the details of programming.</p>
</blockquote>
<p>Here is the list of our contributions to the <a href="https://github.com/leanprover/lean4">lean4
repository</a>:</p>
<ul>
<li>Detection of a major <a href="https://leanprover.zulipchat.com/#narrow/stream/270676-lean4/topic/case.20in.20dependent.20match.20not.20triggering.20.28.3F.29/near/288328239">dependent pattern matching bug</a>
</li>
<li>Some QA with <a href="https://github.com/leanprover/lean4/pull/1844">unintuitive <code>calc</code> indentation</a>
</li>
<li>And some more with <a href="https://github.com/leanprover/lean4/pull/1811">strict indentation in nested <code>by</code>-s requirement</a>
</li>
</ul>
<h4>
<a id="matla" class="anchor"></a><a class="anchor-link" href="#matla">Matla, TLA+ Projects Manager</a>
</h4>
<p><a href="https://ocamlpro.com/blog/2022_01_31_2021_at_ocamlpro/#matla">Last year, we shared a sneakpeek of Matla</a>,
introducing its use-case and the motivations for implementing such manager for
TLA+ projects. As we tinkered with TLA+, sometimes <a href="https://github.com/tlaplus/tlaplus/issues/732">finding a
bug</a>, we continued our
development of Matla on the side.</p>
<p>The tool, although still a work-in-progress, has since then undergone a few
changes and <a href="https://github.com/OCamlPro/matla/releases">releases</a>:</p>
<ul>
<li><a href="https://github.com/OCamlPro/matla/pull/10">Implemented user feedback</a>
</li>
<li><a href="https://github.com/OCamlPro/matla/pull/8">Clap builder overhaul</a>
</li>
<li><a href="https://github.com/OCamlPro/matla/pull/1">Fixed a bug in temporal (lasso) cex parsing</a>
</li>
<li><a href="https://github.com/OCamlPro/matla/pull/7">Documentation efforts</a>
</li>
<li><a href="https://github.com/OCamlPro/matla/pull/6">Fix double quote parsing and no JRE error</a>
</li>
</ul>
<p>You are welcome to <a href="https://github.com/OCamlPro/matla">contribute</a> if you happen
to find yourself in the same situation we were in when we started the project.</p>
<h4>
<a id="agnos" class="anchor"></a><a class="anchor-link" href="#agnos">Agnos, for Let's Encrypt Wildcard Certificates</a>
</h4>
<p>Agnos is a single-binary program allowing you to easily obtain certificates
(including wildcards) from Let's Encrypt using DNS-01 challenges. It answers
Let's Encrypt DNS queries on its own, bypassing the need for API calls to your
DNS provider, and strives to offer a user-friendly and easy configuration.</p>
<p>Often, the best contributions are of a practical nature, which is the case for
<a href="https://github.com/krtab/agnos">Agnos</a>.</p>
<p>If that sounds interesting to you, you can learn more about it by reading
<a href="https://ocamlpro.com/blog/2022_10_05_agnos_0.1.0-beta/">this article</a>.</p>
<p>Make sure to give us some feedback when you end up using it!</p>
<h2>
<a id="wasm" class="anchor"></a><a class="anchor-link" href="#wasm">The WebAssembly Garbage Collection Working-Group</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/wasm.png">
<img alt="WebAssembly is used to compile many languages to an efficient portable code for web-browsers." src="/blog/assets/img/wasm.png"/>
</a>
<div class="caption">
WebAssembly is used to compile many languages to efficient portable code for web browsers.
</div>
</p>
</div>
</p>
<p>Late 2022 was finally time for us to put into practice the knowledge we have
acquired about <a href="https://webassembly.org/">WebAssembly</a> over the years by
writing and presenting the first compiler of a real-world functional language
targeting the WasmGC proposal.</p>
<p>Although it is a <em>relatively</em> new technology, its great design, huge potential, and
already very tangible and interesting use-cases have not escaped our notice, and
we are very happy to have kept a sharp eye on it.</p>
<blockquote>
<p>WebAssembly (abbreviated Wasm) is a binary instruction format for a
stack-based virtual machine. Wasm is designed as a <strong>portable compilation
target</strong> for programming languages, enabling deployment on the web for client
and server applications.</p>
</blockquote>
<p><a href="https://github.com/WebAssembly/gc">WasmGC</a> is the name of the on-going working
group and proposal towards eventually adding support for garbage collection to
Wasm. December 2022 saw a significant amount of work accomplished by both Léo
Andrès (whose thesis work is directed by Pierre and Jean-Christophe Filliâtre)
and Pierre Chambart towards finding viable compilation strategies from OCaml to
WasmGC. The goal was three-fold: make a prototype compiler to demonstrate the
soundness of the proposal, show that our compilation strategies were viable
and, finally, convince the commitee of the significance of the Wasm <code>i31ref</code>
for OCaml.</p>
<p>Our success in these three distinct points was paramount for OCaml, and other
languages that depend on the presence of <code>i31ref</code>, in order to one day benefit
from having WebAssembly as a natively supported compilation target for
Web-bound applications.</p>
<p>Here's a short listing of the work they accomplished on that front.
Rest assured, <a href="https://discuss.ocaml.org/t/announcing-the-ocaml-wasm-organisation/12676/3">more detailed
explanations</a>
are to be expected on this very blog in the future, so stay tuned!</p>
<ul>
<li><a href="https://github.com/WebAssembly/meetings/blob/main/gc/2023/GC-01-10.md">Introducing Wasocaml to the Wasm-GC
Group</a>
and demonstrating the OCaml's dependency on Wasm keeping <code>i31ref</code> in their GC
proposal.
</li>
<li><a href="https://github.com/OCamlPro/wasocaml">Wasocaml</a>, an OCaml compiler to Wasm.
Wasocaml is also the first compiler for a real-world functional language to
Wasm-GC.
</li>
<li><a href="https://github.com/OCamlPro/owi">owi</a>, an OCaml toolchain to work with Wasm.
It provides and interpreter as an executable and a library.
</li>
</ul>
<h2>
<a id="formal-methods" class="anchor"></a><a class="anchor-link" href="#formal-methods">Tooling for Formal Methods</a>
</h2>
<p>Programming language theory is closely tied to the idea of proper
mathematical formalisation. Hence the strong scientific background in Formal
Methods that we draw on, both for language design and for formal verification
in cybersecurity.</p>
<h3>
<a id="prover" class="anchor"></a><a class="anchor-link" href="#prover">The Alt-Ergo Theorem Prover</a>
</h3>
<p>OCamlPro develops and maintains <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a>, an
automatic solver of mathematical formulas designed for program verification and
based on Satisfiability Modulo Theories (SMT) technology. Alt-Ergo was
initially created within the <a href="https://vals.lri.fr/">VALS</a> team at <a href="https://www.universite-paris-saclay.fr/en">University
of Paris-Saclay</a>.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/Blackboard_with_formulas_and_geometry.jpg">
<img alt="Alt-Ergo proves mathematical formulas corresponding to software program properties." src="/blog/assets/img/Blackboard_with_formulas_and_geometry.jpg"/>
</a>
<div class="caption">
Alt-Ergo proves mathematical formulas corresponding to software program properties.
</div>
</p>
</div>
</p>
<h4>
<a id="club" class="anchor"></a><a class="anchor-link" href="#club">The Alt-Ergo Users' Club</a>
</h4>
<p>The Alt-Ergo Users' Club was launched in 2019. Its 4th annual meeting was held
in late March 2022.</p>
<p>These meetings allow us to keep track of our members' needs and demands as well
as keep them informed of the latest changes to the SMT solver; they are the
lifeline of our Club and help us guarantee that the project lives on, despite
the enormous task it represents.</p>
<p>This is a good time to appreciate the scope of the project: Alt-Ergo is the
fruit of more than 10 years' worth of Research & Development. It is currently
maintained by Pierre Villemot, whom we will introduce in the next section, as a
full-time R&D engineer.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/clubAE.png">
<img alt="The dedicated members of the Club!" src="/blog/assets/img/clubAE.png"/>
</a>
<div class="caption">
The dedicated members of the Club!
</div>
</p>
</div>
</p>
<p>This is the reason why we would like to thank our partners from the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo
Users’ Club</a> for their trust: Thales,
TrustInSoft, AdaCore, MERCE (Mitsubishi Electric R&D Centre Europe) and the
CEA. Their support allows us to maintain our tool.</p>
<p>Congratulations and many thanks to our partners at TrustInSoft, who have
upgraded their subscription to the Club to Gold tier, allowing this great
open-source tool to live on!</p>
<h4>
<a id="alt-ergo" class="anchor"></a><a class="anchor-link" href="#alt-ergo">Developing Alt-Ergo</a>
</h4>
<p>In 2022, the Alt-Ergo team welcomed Pierre Villemot as full-time maintainer of
the project! His recruitment shows our commitment to the project's long-term
maintenance and evolution. We are looking forward to seeing him take it to new
heights in future releases! Speaking of releases, 2022 also saw
Alt-Ergo's v2.4.2 minor release, which introduced an update of the <code>lablgtk</code>
library to version 3 and a set of bug fixes.</p>
<p>Now on to the more substantial changes to Alt-Ergo: the integration into
<code>next</code> of all the following:</p>
<ul>
<li>Integration of the SMT-LIB2 format parser
<a href="https://github.com/Gbury/dolmen">Dolmen</a> into Alt-Ergo's frontend;
</li>
<li>Improvement and testing of model generation;
</li>
<li>Addition of mutually recursive functions for the legacy frontend <strong>and</strong> Dolmen alike;
</li>
<li>Significant amounts of documentation and code cleaning;
</li>
<li>Implementation of systematic benchmarks over the SMT-LIB for regression prevention;
</li>
<li>Prototype Dockerisation.
</li>
</ul>
<p>These are significant improvements to the user experience and overall ergonomics
of the tool. You can already benefit from these changes by using Alt-Ergo's
<code>dev</code> version.</p>
<p>Finally, let us inform you that our candidacy for the DECYSIF project was
approved. Indeed, we and our partners at AdaCore, TrustInSoft and the
<em>Laboratoire Méthodes Formelles</em> have been selected to conduct this funded
research project as consultants in Formal Methods. We now hope to be part of
collaborative R&D projects to further fund core Alt-Ergo developments. This
should allow us to deepen collaboration with long-standing partners like the Why3 team at
the Formal Methods Lab (LMF) and the ProofInUse consortium members. Stay tuned!</p>
<h3>
<a id="dolmen" class="anchor"></a><a class="anchor-link" href="#dolmen">Dolmen Library for Automated Deduction Languages</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/dolmen_2023.jpg">
<img alt="Dolmens are Neolithic megalithic structures composed of menhirs and they can range from a few centimeters to several meters high!" src="/blog/assets/img/dolmen_2023.jpg"/>
</a>
<div class="caption">
Dolmens are Neolithic megalithic structures composed of menhirs and they can range from a few centimeters to several meters high!
</div>
</p>
</div>
</p>
<p><a href="https://github.com/Gbury/dolmen">Dolmen</a> is an OCaml Library developed by
Guillaume Bury as part of our Research and Development processes around Formal
Methods and our development efforts for our automated SMT-Solver
<a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a>.</p>
<p>Dolmen is a testament to our push towards standardised input languages for
SMT-Solvers. Indeed, it provides flexible Menhir parsers and typecheckers for
several languages used in automated deduction, such as: smtlib, ae (Alt-Ergo),
dimacs, iCNF, tptp and zf (zipperposition). Dolmen thus aims to encompass
as many input languages for automated deduction as possible, and
provides the OCaml community with a centralised solution for
parsing and typechecking them, saving everyone from having to reimplement them
each time.</p>
<p>Furthermore, the Dolmen binary is used by the maintainers of the SMT-LIB in order
to check that newly submitted SMT-LIB benchmarks are compliant with the
specification, which makes Dolmen its <em>de facto</em> reference implementation. In
time, Dolmen will become the default frontend for Alt-Ergo and, we hope, for any
other SMT-Solver written in OCaml.</p>
<h2>
<a id="ocaml" class="anchor"></a><a class="anchor-link" href="#ocaml">Contributions to OCaml</a>
</h2>
<p>Last but not least, OCamlPro’s DNA was built on one of the most powerful and
elegant programming languages ever, born from more than 30 years of French
public research in Computer Science, and widely used in safety-critical
industries. OCaml’s traits have pervasively inspired many new languages (like F#).
We are proud to be part of this great community of researchers and computer
scientists.</p>
<h3>
<a id="opam" class="anchor"></a><a class="anchor-link" href="#opam">About opam, the OCaml Package Manager</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/opam-banniere-e1600868011587.png">
<img alt="opam, the OCaml Package Manager, remains one of OCamlPro's greatest achievements!" src="/blog/assets/img/opam-banniere-e1600868011587.png"/>
</a>
<div class="caption">
opam, the OCaml Package Manager, remains one of OCamlPro's greatest achievements!
</div>
</p>
</div>
</p>
<p>2022 was the theatre of a sustained and continuous effort from the opam
team.</p>
<p>The fruits of their labour were compiled into an <a href="https://opam.ocaml.org/blog/opam-2-2-0-alpha/">alpha release of version
2.2.0</a> on June 28th,
2023, so here is a taste of what should make the final <code>2.2.0</code> version of opam a
treat for its users:</p>
<ul>
<li>Windows support: opam 2.2 comes with native Windows compatibility. You can now
use opam from your preferred Windows terminal!
</li>
<li>Recursive pinning: lets opam look for opam files
in subdirectories.
</li>
<li>Software Heritage binding: opam now integrates a fallback to Software Heritage
archive retrieval, based on SWHIDs. If an SWHID URL is present in an opam file,
the fallback can be activated.
</li>
<li>Enhanced features for developers: such as the development tools variable to share
a development setup, the <code>opam tree</code> command for a better overview of
dependencies, new pinning subcommands, and so on.
</li>
</ul>
<p>That being said, 2022 was a very special year for opam. Indeed, 10 years prior,
on the 26th of June 2012, OCamlPro released version
<a href="https://github.com/ocaml/opam/releases/tag/0.1"><code>0.1</code></a> of what was to become
the official OCaml Package Manager, the cornerstone of the OCaml environment.</p>
<p>It was no small feat to make opam what it is today. It took approximately 5
years to bring <a href="https://opam.ocaml.org/blog/opam-1-0-0-released/"><code>1.0.0</code></a> up
to <a href="https://opam.ocaml.org/blog/opam-2-0-0/"><code>2.0.0</code></a> and another 3 to reach
<a href="https://opam.ocaml.org/blog/opam-2-1-0/"><code>2.1.0</code></a> all the while ensuring
changes were compliant with the current ecosystem (opam
repository, OCaml tooling) and the public's feedback and vision.</p>
<p><em><strong>This work is allowed thanks to Jane Street's funding.</strong></em></p>
<h3>
<a id="flambda" class="anchor"></a><a class="anchor-link" href="#flambda">The Flambda2 Optimizing Compiler</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/flambda_camel_2023.png">
<img alt="Flambda2 is a powerful code optimizer for the OCaml compiler strong of many years of R&D." src="/blog/assets/img/flambda_camel_2023.png"/>
</a>
<div class="caption">
Flambda2 is a powerful code optimizer for the OCaml compiler strong of many years of R&D.
</div>
</p>
</div>
</p>
<p>OCamlPro is proud to be working on Flambda2, an ambitious OCaml
optimizing compiler project, initiated with Mark Shinwell from Jane
Street, our long-term partner and client. Flambda2 builds upon its
predecessor, Flambda, which focused on reducing the runtime cost of
abstractions and removing as many short-lived allocations as
possible. Thus, Flambda2 not only shines with the maturity and
experience its architects acquired through years' worth of R&D and
dev-time on Flambda, it also improves upon them.</p>
<p>In 2022, Flambda2 was used for production workloads for the first time,
and has been ever since! Indeed, we can officially say that Flambda2
has left the realm of the prototype and entered that of real-life,
production-tested software, for which we continue to provide
development and support, as we have for years now.</p>
<p>This achievement goes hand in hand with our engineers taking a more and more
active role in maintaining the OCaml compiler. Being part of the OCaml
Core-Team is an honour.</p>
<p>Finally, in 2022, the Flambda team welcomed a new member: Nathanaëlle
Courant is joining forces with Pierre Chambart, Damien Doligez,
Vincent Laviron and Guillaume Bury to tackle the challenges inherent to
maintaining Flambda2 and those of the Core-Team.</p>
<p>If you are interested in more things Flambda2, stay tuned in with our blog,
there should be a series of very interesting articles coming up in the not-so
distant future!</p>
<p><em><strong>This work is allowed thanks to Jane Street's funding.</strong></em></p>
<p>In other OCaml compiler news, 2022 was also the year of the official release of
OCaml 5.0.0, also known as Multicore, on the 16th of December. This major
release introduced a new runtime system with support for shared-memory
parallelism and effect handlers! This fabulous milestone is brought to us by the
joint work of the amazing people of the OCaml Core-Team, among them some of our
own.</p>
<p>Many thanks to all who take part in uncovering the yet untrodden paths of the
OCaml distribution!</p>
<p>What a time to be an OCaml enthusiast!</p>
<h2>
<a id="meetups" class="anchor"></a><a class="anchor-link" href="#meetups">Organizing Meetups for the OCaml Community</a>
</h2>
<h3>
<a id="oups" class="anchor"></a><a class="anchor-link" href="#oups">OCaml Users in PariS (OUPS)</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/camels_going_to_oups.jpg">
<img alt="Camels going to their pluri-annual OUPS Meet-up." src="/blog/assets/img/camels_going_to_oups.jpg"/>
</a>
<div class="caption">
Camels going to their pluri-annual OUPS Meet-up.
</div>
</p>
</div>
</p>
<p>Just under 10 years ago, Fabrice Le Fessant initiated the <a href="https://www.meetup.com/fr-FR/ocaml-paris/events/99222322/">very first OCaml
Users in Paris</a>.</p>
<p>This event allowed OCaml users in Paris, professionals and amateurs alike, to
meet and exchange views on OCaml novelties. This is still the case, and the organising
crew now includes several people from diverse affiliations, maintaining the
purpose of this friendly event.</p>
<p>Every two months or so, the organisers reach out to the community, call for
volunteers and select presentations of ongoing work. Once the time and place
are settled, the <code>ocaml-paris</code> Meetup members are informed by various means.
The OCaml Users in PariS meetup is the place to enthusiastically share
knowledge and a few pizzas. It is supported by the <a href="https://ocaml-sf.org/">OCaml Software
Foundation</a>, which graciously pays for the pizzas.</p>
<blockquote>
<p><strong>You can register to the OCaml Users in PariS (OUPS) meetup group
<a href="https://www.meetup.com/ocaml-paris/">here</a></strong>.</p>
</blockquote>
<p>Here are all the relevant links to the talks that happened in Paris in 2022:</p>
<ul>
<li><a href="https://www.meetup.com/ocaml-paris/events/284313963/">10th March 2022</a>
</li>
<li><a href="https://www.meetup.com/ocaml-paris/events/285435718/">12th May 2022</a>
</li>
<li><a href="https://www.meetup.com/ocaml-paris/events/288520108/">29th September 2022</a>
</li>
<li><a href="https://www.meetup.com/ocaml-paris/events/289909374/">8th December 2022</a>
</li>
</ul>
<h3>
<a id="octo" class="anchor"></a><a class="anchor-link" href="#octo">OCaml Meet-Up in Toulouse</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/Hopital_de_la_Grave-Toulouse-2012-06-23.jpg">
<img alt="Toulouse also has its set of enthousiastic OCaml supporters." src="/blog/assets/img/Hopital_de_la_Grave-Toulouse-2012-06-23.jpg"/>
</a>
<div class="caption">
Toulouse also has its set of enthusiastic OCaml supporters.
</div>
</p>
</div>
</p>
<p>Fortunately for OCaml users living in the French South-West, <a href="https://www.meetup.com/ocaml-toulouse/events/288464047/">a new meet-up
is now available</a> to
them. On the 11th of October 2022, the first OCaml meet-up in
<a href="https://en.wikipedia.org/wiki/Toulouse">Toulouse</a> took place.</p>
<p>The first occurrence of the OCaml Users in Toulouse Meetup kicked off with Erik
Martin-Dorel (OCaml Software Foundation) presenting
<a href="https://ocaml-sf.org/learn-ocaml/"><code>Learn-OCaml</code></a>, followed by
David Declerck (OCamlPro) presenting his
<a href="https://github.com/ocamlpro/ocaml-canvas"><code>OCaml-Canvas</code></a> graphics library for
OCaml.</p>
<blockquote>
<p><strong>You can register to the OCaml Meet-Up in Toulouse group
<a href="https://www.meetup.com/ocaml-toulouse/">here</a></strong>.</p>
</blockquote>
<p>Here's to sharing a slice or two with you soon!</p>
<h2>
<a id="confs" class="anchor"></a><a class="anchor-link" href="#confs">Participation to External Events</a>
</h2>
<h3>
<a id="ocamlworkshop" class="anchor"></a><a class="anchor-link" href="#ocamlworkshop">The OCaml Workshop 2022 - ICFP Ljubljana</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ljubjana_slovenie_icfp_2022.jpg">
<img alt="ICFP 2022 took place in the beautiful town of Ljubjana, Slovenia." src="/blog/assets/img/ljubjana_slovenie_icfp_2022.jpg"/>
</a>
<div class="caption">
ICFP 2022 took place in the beautiful town of Ljubjana, Slovenia.
</div>
</p>
</div>
</p>
<p>The OCaml Workshop is an international conference that focuses on everything
OCaml and is part of the ICFP (International Conference on Functional
Programming).</p>
<p>We attended many of these and have presented numerous papers throughout the
years.</p>
<p>In 2022, a paper co-authored by the maintainers of opam, the OCaml Package
Manager, was submitted and approved for presentation: "Supporting a decade of
opam".</p>
<p>You can find the textual references of the talk
<a href="https://icfp22.sigplan.org/details/ocaml-2022-papers/11/Supporting-a-decade-of-opam">here</a>
and a replay of the presentation
<a href="https://watch.ocaml.org/w/1rWj4jYyaDkmMjdH4KNcv6">there</a>.</p>
<p>You can expect more papers and interesting talks coming from us in upcoming
editions of the conference!</p>
<h3>
<a id="jfla2022" class="anchor"></a><a class="anchor-link" href="#jfla2022">Journées Francophones Langages Applicatifs 2022</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/picture_jfla2022_domaine_essendieras.jpg">
<img alt="the JFLA'2022 took place in the beautiful Domaine d'Essendiéras in Périgord, France." src="/blog/assets/img/picture_jfla2022_domaine_essendieras.jpg"/>
</a>
<div class="caption">
the JFLA'2022 took place in the beautiful Domaine d'Essendiéras in Périgord, France.
</div>
</p>
</div>
</p>
<p>Among the many scientific conferences we attend on an annual basis, the
<a href="https://jfla.inria.fr/">JFLA</a> (<em>Journées Francophones des Langages Applicatifs</em>,
or <em>French-speaking annual gathering on Application Programming Languages</em>,
mainly functional languages) is the one where we have felt most at home since 2016.</p>
<p>Ever since, we have remained among its faithful supporters and participants.
This gathering of many of our fellow French computer scientists and industrial
actors alike has been our go-to conference to catch up with them and present our
work. The 2022 edition was no exception!</p>
<p>We submitted and presented the following papers:</p>
<ul>
<li><a href="https://ocamlpro.com/blog/2021_10_14_verification_for_dummies_smt_and_induction/">Mikino, formal verification made accessible (link to dedicated
blogpost)</a>;
</li>
<li>Connecting Software Heritage with the OCaml ecosystem;
</li>
<li>Alt-Ergo-Fuzz, hunting the bugs of the bug hunter;
</li>
</ul>
<p>You can find a more detailed recounting of our JFLA 2022 submissions in <a href="https://ocamlpro.com/blog/2022_07_12_ocamlpro_at_the_jfla2022/">this
blog post</a>, as
well as links to the actual submitted papers (written in French).</p>
<h2>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h2>
<p>As always, we warmly thank all our clients, partners, and friends, for their
support and collaboration throughout the year,</p>
<p>And to you, dear reader, thank you for tagging along,</p>
<p>Since 2011 with love,</p>
<p>The OCamlPro Team</p>
Autofonce, GNU Autotests Revisitedhttps://ocamlpro.com/blog/2023_03_18_autofonce2023-06-27T08:12:13Z2023-06-27T08:12:13Z
Fabrice Le Fessant
Since 2022, OCamlPro has been contributing to GnuCOBOL, the only fully open-source compiler for the COBOL language. To speed-up our contributions to the compiler, we developed a new tool, autofonce, to be able to easily run and modify the testsuite of the compiler, originally written as a GNU Autoco...<p></p>
<p>Since 2022, OCamlPro has been contributing to GnuCOBOL, the only fully
open-source compiler for the COBOL language. To speed up our
contributions to the compiler, we developed a new tool, <code>autofonce</code>,
to easily run and modify the compiler's testsuite,
originally written as a GNU Autoconf testsuite. This article describes
this tool, which could be useful for other projects' testsuites.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#introduction">Introduction</a>
</li>
<li><a href="#gnucobol">The Gnu Autoconf Testsuite of GnuCOBOL</a>
</li>
<li><a href="#autofonce">Main Features of Autofonce</a>
</li>
<li><a href="#conclusion">Conclusion</a>
</div>
</li>
</ul>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/autofonce-2023.png">
<img alt="Autofonce is a modern runner for GNU Autoconf Testsuite" src="/blog/assets/img/autofonce-2023.png"/>
</a>
<div class="caption">
Autofonce is a modern runner for GNU Autoconf Testsuite
</div>
</p>
</div>
</p>
<h2>
<a id="introduction" class="anchor"></a><a class="anchor-link" href="#introduction">Introduction</a>
</h2>
<p>Since 2022, OCamlPro has been involved in a big modernization project
for the French state: the goal is to move a large COBOL application,
running on a former Bull mainframe
(<a href="https://fr.wikipedia.org/wiki/General_Comprehensive_Operating_System">GCOS</a>),
to a cluster of Linux computers. The choice was made to use the only fully
open-source compiler, <a href="https://gnucobol.sourceforge.io/">GnuCOBOL</a>,
which had already been used in such projects.</p>
<p>One of the main problems in such migration projects is that most
proprietary COBOL compilers provide extensions to the COBOL language
standard that are not supported by other compilers. Fortunately,
GnuCOBOL has good support for several mainstream COBOL dialects, such
as the IBM and Micro Focus ones. Unfortunately, GnuCOBOL had no support at
the time for the GCOS COBOL dialect developed by Bull for its
mainframes.</p>
<p>As a consequence, OCamlPro got involved in the project to extend
GnuCOBOL with support for the GCOS dialect needed by the
application. This work implied a lot of (sometimes very deep)
<a href="https://github.com/OCamlPro/gnucobol/pulls?q=is%3Apr+is%3Aclosed">modifications</a>
to the compiler and its runtime library, both of them written in the C
language. And of course, our modifications first had to pass the large
existing testsuite of COBOL examples; we then extended it with new
tests, so that the new dialect would keep working in the future.</p>
<p>This work led us to develop <a href="https://github.com/OCamlPro/autofonce"><code>autofonce</code>, a modern open-source
runner</a> for GNU Autoconf
testsuites, the framework used in GnuCOBOL to manage its
testsuite. Our tool is available on GitHub, with Linux and Windows
binaries on the <a href="https://github.com/OCamlPro/autofonce/releases">release page</a>.</p>
<h2>
<a id="gnucobol" class="anchor"></a><a class="anchor-link" href="#gnucobol">The GNU Autoconf Testsuite of GnuCOBOL</a>
</h2>
<p><a href="https://www.gnu.org/software/autoconf/">GNU Autoconf</a> is a set of
powerful tools, developed to help developers of open-source projects
to manage their projects, from configuration steps to testing and
installation. As a very old technology, GNU Autoconf relies heavily on
<a href="https://www.gnu.org/software/m4/manual/m4.html">M4 macros</a> both as
its own development language, and as its extension language, typically
for tests.</p>
<p>In GnuCOBOL, the testsuite is in a <a href="https://github.com/OCamlPro/gnucobol/tree/gcos4gnucobol-3.x/tests">sub-directory <code>tests/</code></a>, containing
a file <a href="https://github.com/OCamlPro/gnucobol/blob/gcos4gnucobol-3.x/tests/testsuite.at"><code>testsuite.at</code></a>, itself including other files from a
sub-directory <a href="https://github.com/OCamlPro/gnucobol/blob/gcos4gnucobol-3.x/tests/testsuite.src"><code>testsuite.src/</code></a>.</p>
<p>As an example, a typical test from <a href="https://github.com/OCamlPro/gnucobol/blob/gcos4gnucobol-3.x/tests/testsuite.src/syn_misc.at">syn_misc.at</a> looks like:</p>
<pre><code class="language-COBOL">AT_SETUP([INITIALIZE constant])
AT_KEYWORDS([misc])
AT_DATA([prog.cob], [
IDENTIFICATION DIVISION.
PROGRAM-ID. prog.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 CON CONSTANT 10.
01 V PIC 9.
78 C78 VALUE 'A'.
PROCEDURE DIVISION.
INITIALIZE CON.
INITIALIZE V.
INITIALIZE V, 9.
INITIALIZE C78, V.
])
AT_CHECK([$COMPILE_ONLY prog.cob], [1], [],
[prog.cob:10: error: invalid INITIALIZE statement
prog.cob:12: error: invalid INITIALIZE statement
prog.cob:13: error: invalid INITIALIZE statement
])
AT_CLEANUP
</code></pre>
<p>Actually, we were quite pleased by the syntax of tests: it is easy to
generate test files (using the <code>AT_DATA</code> macro) and to test the execution
of commands (using the <code>AT_CHECK</code> macro), checking their exit code,
standard output and error output separately. It is even possible to
chain checks, running additional checks in case of error or
success. In general, the testsuite is easy to read and extend.</p>
<p>However, there were still some issues:</p>
<ul>
<li>
<p>At every update of the code or the tests, the testsuite runner has to be recompiled;</p>
</li>
<li>
<p>Running the testsuite requires being in the correct sub-directory,
typically within the <code>_build/</code> sub-directory;</p>
</li>
<li>
<p>By default, tests are run sequentially, even when many cores are available.</p>
</li>
<li>
<p>The output is pretty verbose, showing all tests that have been executed. Failed tests are often lost in the middle of other successful tests, and you have to wait for the end of the run to start investigating them;</p>
<pre><code class="language-shell">## -------------------------------------------- ##
## GnuCOBOL 3.2-dev test suite: GnuCOBOL Tests. ##
## -------------------------------------------- ##
General tests of used binaries
1: compiler help and information ok
2: compiler warnings ok
3: compiler outputs (general) ok
4: compiler outputs (file specified) ok
5: compiler outputs (path specified) ok
6: compiler outputs (assembler) ok
7: source file not found ok
8: temporary path invalid ok
9: use of full path for cobc ok
10: C Compiler optimizations ok
11: invalid cobc option ok
12: cobcrun help and information ok
13: cobcrun validation ok
14: cobcrun -M DSO entry argument ok
15: cobcrun -M directory/ default ok
[...]
</code></pre>
</li>
<li>
<p>There is no automatic way to update tests when their output has changed.
Every test has to be updated manually.</p>
</li>
<li>
<p>In case of error, it is not always easy to rerun a specific test
within its directory.</p>
</li>
</ul>
<p>With <code>autofonce</code>, we tried to solve all of these issues...</p>
<h2>
<a id="autofonce" class="anchor"></a><a class="anchor-link" href="#autofonce">Main Features of Autofonce</a>
</h2>
<p><code>autofonce</code> is written in a modern language, OCaml, so that it can
handle a large testsuite much faster than GNU Autoconf. Since we do
not expect users to have an OCaml environment set up, we provide
binary versions of <code>autofonce</code> for both Linux (static executable) and
Windows (cross-compiled executable) on Github.</p>
<p><code>autofonce</code> does not use <code>m4</code>; instead, it has limited support for a
small set of predefined m4 macros, typically supporting m4 escape
sequences (quadrigraphs), but neither the addition of new m4 macros
nor the execution of shell commands outside of these macros (yes,
testsuites in GNU Autoconf are actually <code>sh</code> shell scripts with m4
macros...). In the case of GnuCOBOL, we were lucky enough that the
testsuite was well written and avoided such problems (we had to fix
only a few of them, for example by moving shell commands into <code>AT_CHECK</code>
macros). The syntax of tests is <a href="https://ocamlpro.github.io/autofonce/sphinx/format.html">documented here</a>.</p>
<p>Some interesting features of <code>autofonce</code> are:</p>
<ul>
<li>
<p><code>autofonce</code> executes the tests in parallel by default, using as many
cores as available. Only failed tests are printed, so that the
developer can immediately start investigating them;</p>
</li>
<li>
<p><code>autofonce</code> can be run from any directory in the project. A
<a href="https://github.com/OCamlPro/gnucobol/blob/gcos4gnucobol-3.x/.autofonce"><code>.autofonce</code> file</a> has to be present at the root of the project, to
describe where the tests are located and in which environment they
should be executed;</p>
</li>
<li>
<p><code>autofonce</code> makes it easy to re-execute a specific test that failed,
by generating, within the test sub-directory, a script for every
step of the test;</p>
</li>
<li>
<p><code>autofonce</code> provides many options to filter which tests should be
executed. Tests can be specified by number, range of numbers,
keywords, or negative keywords. The complete list of options is
easily printable using <code>autofonce run --help</code> for example;</p>
</li>
</ul>
<p>Additionally, <code>autofonce</code> implements a powerful promotion mechanism
to update tests, with the <a href="https://ocamlpro.github.io/autofonce/sphinx/commands.html#autofonce-promote"><code>autofonce promote</code>
sub-command</a>. For
example, if you update a warning message in the compiler, you would
like all tests where this message appears to be updated accordingly. With
<code>autofonce</code>, it is as easy as:</p>
<pre><code class="language-shell"># Run all tests at least once
autofonce run
# Print the patch that would be applied in case of promotion
autofonce promote
# Apply the patch above
autofonce promote --apply
# Combine running and promotion 10 times:
autofonce run --auto-promote 10
</code></pre>
<p>The last command iterates promotion up to 10 times: indeed, since a test may
have multiple checks, and only the first failed check of the test is
updated during one iteration (because the test aborts at the first
failed check), as many iterations as the maximal number of failed
checks within a test may be needed.</p>
<p>Also, as with GNU Autoconf, <code>autofonce</code> generates a final log file containing the results, with a full log of errors and the files needed to reproduce them. This file can be uploaded to the artefacts of a CI system to easily debug errors after a CI failure.</p>
<h2>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h2>
<p>During our work on GnuCOBOL, <code>autofonce</code> greatly improved our
experience of running the testsuite, especially thanks to the
auto-promotion feature, used to update tests after modifications.</p>
<p>We hope <code>autofonce</code> can be used for other open-source projects that
already use a GNU Autoconf testsuite. Of course, it requires that
the testsuite does not make heavy use of shell features and mainly
relies on standard m4 macros.</p>
<p>We found the format of GNU Autoconf tests to be quite powerful for
easily checking the exit codes, standard output and error output of shell
commands. <code>autofonce</code> could help bring this format to
projects that do not want to rely on an old tool like GNU Autoconf
and are looking for a much more modern test framework.</p>
Sub-single-instruction Peano to machine integer conversionhttps://ocamlpro.com/blog/2023_01_23_Pea_No_Op2023-01-23T08:12:13Z2023-01-23T08:12:13Z
Arthur Carcano
It is a rainy end of January in Paris, morale is getting soggier by the day, and the bulk of our light exposure needs are now fulfilled by our computer screens as the sun seems to have definitively disappeared behind a continuous stream of low-hanging clouds. But, all is not lost, the warm rays of c...<p></p>
<p><img src="/blog/assets/img/forgive_me_father.png" alt="" /></p>
<p>It is a rainy end of January in Paris, morale is getting soggier by the day, and the bulk of our light exposure needs are now fulfilled by our computer screens as the sun seems to have definitively disappeared behind a continuous stream of low-hanging clouds. But, all is not lost, the warm rays of comradeship pierce through the bleak afternoons, and our joyful <a href="https://ocamlpro.com/team">party</a> of adventurers once again embarked on an adventure of curiosity and rabbit-hole depth-first-searching.</p>
<p>Last week's quest led us to a treasure coveted by a mere handful of enlightened connoisseurs, but a treasure of tremendous nerdy-beauty, known to the academics as "Sub-single-instruction Peano to machine integer conversion" and to the locals as "How to count how many nested <code>Some</code> there are very very fast by leveraging druidic knowledge about unspecified, undocumented, and unstable behavior of the Rust compiler".</p>
<h1>Our quest begins</h1>
<p>Our whole quest started when we wanted to learn more about discriminant elision. Discriminant elision in Rust is part of what makes it practical to use <code>Option<&T></code> in place of <code>*const T</code>. More precisely, it is what allows <code>Option<&T></code> to fit in as much memory as <code>*const T</code>, and not twice as much. To understand why, let's consider an <code>Option<u64></code>. A <code>u64</code> is 8 bytes in size. An <code>Option<u64></code> should have at least one more bit, to indicate whether it is a <code>None</code> or a <code>Some</code>. But bits are not very practical to deal with for computers, and hence this <em>discriminant</em> value -- indicating which of the two variants (<code>Some</code> or <code>None</code>) the value is -- should take up at least one byte. Because of <a href="https://doc.rust-lang.org/reference/type-layout.html#the-default-representation">alignment requirements</a> (and because the size is always a multiple of the alignment), it actually ends up taking 8 bytes as well, so that the whole <code>Option<u64></code> occupies twice the size of the wrapped <code>u64</code>.</p>
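<p>The sizes quoted above are easy to check for yourself; here is a small, self-contained sketch (sizes observed on a 64-bit target with a recent rustc; apart from references, the exact layout of <code>Option<u64></code> is not guaranteed by the language):</p>
<pre><code class="language-rust">use std::mem::size_of;

fn main() {
    // The wrapped integer and a reference are both 8 bytes on a 64-bit target.
    assert_eq!(size_of::<u64>(), 8);
    assert_eq!(size_of::<&u64>(), 8);
    // The discriminant plus alignment doubles the size of Option<u64>...
    assert_eq!(size_of::<Option<u64>>(), 16);
    // ...while discriminant elision keeps Option<&u64> pointer-sized.
    assert_eq!(size_of::<Option<&u64>>(), 8);
}
</code></pre>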
<p>In languages like C, it is very common to pass around pointers, and give them a specific meaning if they are null. Typically, a function like <a href="https://linux.die.net/man/3/lfind"><code>lfind</code></a> which searches for an element in an array will return a pointer to the matching element, and this pointer will be null if no such element was found. In Rust, however, fallibility is expected to be encoded in the type system. Hence, functions like <a href="https://doc.rust-lang.org/core/iter/trait.Iterator.html#method.find"><code>find</code></a> return a reference wrapped in an <code>Option</code>. Because this kind of API is so ubiquitous, it would have been a real hindrance to Rust adoption if it took twice as much space as the C version.</p>
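<p>As a small illustration of that pattern (this snippet is ours, not taken from any of the code discussed here), the "not found" case is expressed in the type rather than by a null pointer, at no extra cost in space:</p>
<pre><code class="language-rust">fn main() {
    let haystack = [10, 20, 30];
    // `find` returns an Option<&i32>: `None` plays the role of C's null
    // pointer, but the "not found" case is tracked by the type system.
    let hit: Option<&i32> = haystack.iter().find(|&&x| x == 20);
    assert_eq!(hit, Some(&20));
    let miss: Option<&i32> = haystack.iter().find(|&&x| x == 42);
    assert_eq!(miss, None);
    // Thanks to discriminant elision, it costs no more than a raw pointer.
    assert_eq!(
        std::mem::size_of::<Option<&i32>>(),
        std::mem::size_of::<*const i32>()
    );
}
</code></pre>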
<p>This is why discriminant elision exists. In our <code>Option<&T></code> example, Rust can leverage the same logic as C: <code>&T</code> references in Rust are guaranteed to be -- among other things -- non-null. Hence Rust can choose to encode the <code>None</code> variant as the null value of the variable. Transparently to the user, our <code>Option<&T></code> now fits on 8 bytes, the same size as a simple <code>&T</code>. But Rust's discriminant elision mechanism goes beyond <code>Option<&T></code> and works for any general type if:</p>
<ol>
<li>The option-like value has one fieldless variant and one single-field variant
</li>
<li>The wrapped type has so-called niche values, that is values that are statically known to never be valid for said type.
</li>
</ol>
<p>Discriminant elision remains under-specified, but more information can be found in the <a href="https://rust-lang.github.io/unsafe-code-guidelines/layout/enums.html#discriminant-elision-on-option-like-enums">FFI guidelines</a>. Note that other unspecified situations seem to benefit from niche optimization (e.g. <a href="https://github.com/rust-lang/rust/pull/94075/">PR#94075</a>).</p>
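<p>A classic source of such niche values, besides references, is the <code>std::num::NonZero*</code> family: since 0 is never a valid value, the compiler can reuse it to encode the fieldless variant. Here is a quick sketch (the <code>MaybeU64</code> enum is ours, written only to illustrate the two conditions above; the size of <code>Option<NonZeroU64></code> is documented, that of a hand-rolled enum is merely what current compilers produce):</p>
<pre><code class="language-rust">use std::mem::size_of;
use std::num::NonZeroU64;

// A hand-rolled option-like enum: one fieldless variant and one single-field
// variant, whose field type has a niche (0 is never a valid NonZeroU64).
#[allow(dead_code)]
enum MaybeU64 {
    Nothing,
    Just(NonZeroU64),
}

fn main() {
    // The forbidden value 0 is used to encode the "empty" variant,
    // so no extra discriminant byte is needed.
    assert_eq!(size_of::<Option<NonZeroU64>>(), size_of::<u64>());
    assert_eq!(size_of::<MaybeU64>(), size_of::<u64>());
}
</code></pre>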
<h1>Too many options</h1>
<p>Out of curiosity, we wanted to investigate how the Rust compiler represents a series of nested <code>Option</code>s. It turns out that up to 255 nested options can be stored in a byte, which is also the theoretical limit. Because this mechanism is not limited to <code>Option</code>, we can use it with (value-level) <a href="https://en.wikipedia.org/wiki/Peano_axioms">Peano integers</a>. Peano integers are a theoretical encoding of integers in "unary base", but it is enough for this post to consider them a fun little gimmick. If you want to go further, know that Peano integers are more often used at the type level, to try to emulate type-level arithmetic.</p>
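<p>Before introducing Peano integers, this stacking effect can already be observed with a type as mundane as <code>bool</code>, which only uses two of its 256 bit patterns and therefore offers plenty of niches (these sizes are what we observe with current rustc; none of this is guaranteed by the language):</p>
<pre><code class="language-rust">use std::mem::size_of;

fn main() {
    // bool only uses the bit patterns 0 and 1, leaving 254 niches,
    // so several layers of Option can all share the same single byte.
    assert_eq!(size_of::<bool>(), 1);
    assert_eq!(size_of::<Option<bool>>(), 1);
    assert_eq!(size_of::<Option<Option<bool>>>(), 1);
    assert_eq!(size_of::<Option<Option<Option<bool>>>>(), 1);
}
</code></pre>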
<p>In our case, we are mostly interested in Peano-integers at the value level. We define them as follows:</p>
<pre><code class="language-rust">#![recursion_limit = "512"]
#![allow(dead_code)]
/// An empty enum, a type without inhabitants.
/// Cf: https://en.wikipedia.org/wiki/Bottom_type
// Deriving Debug here is needed so that the nested PeanoEncoder types,
// whose derived Debug impls require T: Debug, can be printed below.
#[derive(Debug)]
enum Null {}
/// PeanoEncoder<Null> is a Peano-type able to represent integers up to 0.
/// If T is a Peano-type able to represent integers up to n
/// PeanoEncoder<T> is a Peano-type able to represent integers up to n+1
#[derive(Debug)]
enum PeanoEncoder<T> {
Successor(T),
Zero,
}
macro_rules! times2 {
($peano_2x:ident, $peano_x:ident ) => {
type $peano_2x<T> = $peano_x<$peano_x<T>>;
};
}
times2!(PeanoEncoder2, PeanoEncoder);
times2!(PeanoEncoder4, PeanoEncoder2);
times2!(PeanoEncoder8, PeanoEncoder4);
times2!(PeanoEncoder16, PeanoEncoder8);
times2!(PeanoEncoder32, PeanoEncoder16);
times2!(PeanoEncoder64, PeanoEncoder32);
times2!(PeanoEncoder128, PeanoEncoder64);
times2!(PeanoEncoder256, PeanoEncoder128);
type Peano0 = PeanoEncoder<Null>;
type Peano255 = PeanoEncoder256<Null>;
</code></pre>
<p>Note that we cannot simply go for</p>
<pre><code class="language-rust">enum Peano {
    Successor(Peano),
Zero,
}
</code></pre>
<p>like in <a href="https://wiki.haskell.org/Peano_numbers">Haskell</a> or OCaml because without indirection the type has <a href="https://doc.rust-lang.org/error_codes/E0072.html">infinite size</a>, and adding indirection would break discriminant elision. What we really have is that we are actually using a <em>type-level</em> Peano-encoding of integers to create a type <code>Peano256</code> that contains <em>value-level</em> Peano-encoding of integers up to 255, as a byte would.</p>
<p>We can define the typical recursive pattern matching based way of converting our Peano integer to a machine integer (a byte).</p>
<pre><code class="language-rust">trait IntoU8 {
fn into_u8(self) -> u8;
}
impl IntoU8 for Null {
fn into_u8(self) -> u8 {
match self {}
}
}
impl<T: IntoU8> IntoU8 for PeanoEncoder<T> {
fn into_u8(self) -> u8 {
match self {
PeanoEncoder::Successor(x) => 1 + x.into_u8(),
PeanoEncoder::Zero => 0,
}
}
}
</code></pre>
<p>Here, according to <a href="https://godbolt.org/z/hfdKdxe19">godbolt</a>, <code>Peano255::into_u8</code> gets compiled to more than 900 lines of assembly, which resembles a binary decision tree with jump-tables at the leaves.</p>
<p>However, we can inspect a bit how rustc represents a few values:</p>
<pre><code class="language-rust">println!("Size of Peano255: {} byte", std::mem::size_of::<Peano255>());
for x in [
Peano255::Zero,
Peano255::Successor(PeanoEncoder::Zero),
Peano255::Successor(PeanoEncoder::Successor(PeanoEncoder::Zero)),
] {
println!("Machine representation of {:?}: {}", x, unsafe {
std::mem::transmute::<_, u8>(x)
})
}
</code></pre>
<p>which gives</p>
<pre><code>Size of Peano255: 1 byte
Machine representation of Zero: 255
Machine representation of Successor(Zero): 254
Machine representation of Successor(Successor(Zero)): 253
</code></pre>
<p>A pattern seems to emerge. Rustc chooses to represent <code>Peano255::Zero</code> as 255, and each successor as one less.</p>
<p>As a brief detour, let's see what happens for <code>PeanoN</code> with other values of N.</p>
<pre><code class="language-rust">let x = Peano1::Zero;
println!("Machine representation of Peano1::{:?}: {}", x, unsafe {
std::mem::transmute::<_, u8>(x)
});
for x in [
Peano2::Successor(PeanoEncoder::Zero),
Peano2::Zero,
] {
println!("Machine representation of Peano2::{:?}: {}", x, unsafe {
std::mem::transmute::<_, u8>(x)
})
}
</code></pre>
<p>gives</p>
<pre><code>Machine representation of Peano1::Zero: 1
Machine representation of Peano2::Successor(Zero): 1
Machine representation of Peano2::Zero: 2
</code></pre>
<p>Notice that the representation of Zero is not the same for each <code>PeanoN</code>. What we actually have -- and what is key here -- is that the representation for <code>x</code> of type <code>PeanoN</code> is the same as the representation of <code>Successor(x)</code> of type <code>PeanoEncoder<PeanoN></code>, which implies that the machine representation of an integer <code>k</code> in the type <code>PeanoN</code> is <code>n-k</code>.</p>
<p>That detour being concluded, we refocus on <code>Peano255</code> for which we can write a very efficient conversion function</p>
<pre><code class="language-rust">impl Peano255 {
pub fn transmute_u8(x: u8) -> Self {
unsafe { std::mem::transmute(u8::MAX - x) }
}
}
</code></pre>
<p>Note that this function's mere existence is very wrong and a sinful abomination to the eye of anything that is holy and maintainable. But provided you run the same compiler version as me on the very same architecture, you may be ok using it. Please don't use it.</p>
<p>In any case <code>transmute_u8</code> gets compiled to</p>
<pre><code>movl %edi, %eax
notb %al
retq
</code></pre>
<p>that is, a simple function that applies a bitwise NOT to its argument register. And in most use cases, this function would actually be inlined and combined with the operations above, making it run in less than one processor operation!</p>
<p>And because 255 is so small, we can exhaustively check that the behavior is correct for all values! Take that, formal methods!</p>
<pre><code class="language-rust">for i in 0_u8..=u8::MAX {
let x = Peano255::transmute_u8(i);
if i % 8 == 0 {
print!("{:3} ", i)
} else if i % 8 == 4 {
print!(" ")
}
let c = if x.into_u8() == i { '✓' } else { '✗' };
print!("{}", c);
if i % 8 == 7 {
println!()
}
}
</code></pre>
<pre><code> 0 ✓✓✓✓ ✓✓✓✓
8 ✓✓✓✓ ✓✓✓✓
16 ✓✓✓✓ ✓✓✓✓
24 ✓✓✓✓ ✓✓✓✓
32 ✓✓✓✓ ✓✓✓✓
40 ✓✓✓✓ ✓✓✓✓
48 ✓✓✓✓ ✓✓✓✓
56 ✓✓✓✓ ✓✓✓✓
64 ✓✓✓✓ ✓✓✓✓
72 ✓✓✓✓ ✓✓✓✓
80 ✓✓✓✓ ✓✓✓✓
88 ✓✓✓✓ ✓✓✓✓
96 ✓✓✓✓ ✓✓✓✓
104 ✓✓✓✓ ✓✓✓✓
112 ✓✓✓✓ ✓✓✓✓
120 ✓✓✓✓ ✓✓✓✓
128 ✓✓✓✓ ✓✓✓✓
136 ✓✓✓✓ ✓✓✓✓
144 ✓✓✓✓ ✓✓✓✓
152 ✓✓✓✓ ✓✓✓✓
160 ✓✓✓✓ ✓✓✓✓
168 ✓✓✓✓ ✓✓✓✓
176 ✓✓✓✓ ✓✓✓✓
184 ✓✓✓✓ ✓✓✓✓
192 ✓✓✓✓ ✓✓✓✓
200 ✓✓✓✓ ✓✓✓✓
208 ✓✓✓✓ ✓✓✓✓
216 ✓✓✓✓ ✓✓✓✓
224 ✓✓✓✓ ✓✓✓✓
232 ✓✓✓✓ ✓✓✓✓
240 ✓✓✓✓ ✓✓✓✓
248 ✓✓✓✓ ✓✓✓✓
</code></pre>
<p>Isn't computer science fun?</p>
<p><em>Note:</em> The code for this blog post is available <a href="https://github.com/OCamlPro/PeaNoOp">here</a>.</p>
Statically guaranteeing security properties on Java bytecode: Paper presentation at VMCAI 23https://ocamlpro.com/blog/2023_01_12_vmcai_popl2023-01-12T08:12:13Z2023-01-12T08:12:13Z
Nicolas Berthier
We are excited to announce that Nicolas will present a paper at the International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI) the 16th and 17th of January. This year, VMCAI is co-located with the Symposium on Principles of Programming Languages (POPL) conference, ...<p></p>
<p>We are excited to announce that Nicolas will present a paper at the <a href="https://popl23.sigplan.org/home/VMCAI-2023">International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI)</a> on the 16th and 17th of January.</p>
<p>This year, VMCAI is co-located with the <a href="https://popl23.sigplan.org/">Symposium on Principles of Programming Languages (POPL)</a> conference, which, as its name suggests, is a flagship conference in the Programming Languages domain.</p>
<p>What's more, for its 50th anniversary edition, POPL will return to where its first edition took place: Boston!
It is thus in the vicinity of MIT and Harvard that we will meet with prominent figures of computer science research.</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/popl2023.jpg">
<img alt="This paper will be presented at VMCAI'2023, colocated with POPL'2023 at Boston!" src="/blog/assets/img/popl2023.jpg"/>
</a>
<div class="caption">
This paper will be presented at VMCAI'2023, colocated with POPL'2023 at Boston!
</div>
</p>
</div>
</p>
<!-- ## A sound technique to statically guarantee non-interference -->
<h2>A sound technique to statically guarantee security properties on Java bytecode</h2>
<p>Nicolas will be presenting a novel static program analysis technique dedicated to the discovery of information flows in Java bytecode.
By automatically discovering such flows, the new technique allows developers and users of Java libraries to assess key security properties on the software they run.</p>
<p>Two prominent examples of such properties are <em>confidentiality</em> (stating that no single bit of secret information may be inadvertently revealed by the software), and its dual, <em>integrity</em> (stating that no single bit of trusted information may be tampered with via untrusted data).</p>
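<p>To make these properties concrete, here is a small, purely illustrative OCaml sketch (unrelated to the analysed Java code) of the kind of flow a confidentiality analysis would flag: the public result depends on secret data, so information about the secret leaks to an observer.</p>
<pre><code class="language-ocaml">(* [secret_pin] is confidential (high) data; [check] produces a public
   (low) result. Because the branch depends on the secret, every call
   leaks some information about the PIN to whoever sees the result. *)
let secret_pin = 1234

let check (guess : int) : string =
  if guess = secret_pin then "access granted" else "access denied"
</code></pre>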
<p>The technique is proven <em>sound</em> (i.e. it cannot miss a flow of information), and achieves <em>state-of-the-art precision</em> (i.e. it does not raise too many false alarms) according to evaluations using the <a href="https://pp.ipd.kit.edu/uploads/publikationen/ifspec18nordsec.pdf">IFSpec benchmark suite</a>.</p>
<h2>Try it out!</h2>
<p>In addition to being supported by a proof, the technique has also been implemented in a tool called <a href="http://nberth.space/symmaries">Guardies</a>.</p>
<p>We believe this static analysis tool will naturally complement the taint tracking and dynamic analysis techniques that are usually employed to assess software security.</p>
<h2>Reading more about it</h2>
<p>You may already access the full paper <a href="https://arxiv.org/abs/2211.03450">here</a>.</p>
<p>Nicolas developed this contribution while working at the University of Liverpool, in collaboration with Narges Khakpour, herself from the University of Newcastle.</p>
Release of ocplib-simplex, version 0.5https://ocamlpro.com/blog/2022_11_25_ocplib-simplex-0.52023-01-05T08:12:13Z2023-01-05T08:12:13Z
Steven de Oliveira
Pierre Villemot
Hichem Rami Ait El Hara
Guillaume Bury
On last November, we released version 0.5 of ocplib-simplex, a generic library implementing the Simplex Algorithm in OCaml. It is a key component of the Alt-Ergo automatic theorem prover that we keep developing at OCamlPro. ** The Simplex Algorithm
What Changed in 0.5 ? ] The simplex algorithm The S...<p></p>
<p>Last November, we released <a href="https://opam.ocaml.org/packages/ocplib-simplex/">version
0.5</a> of
<a href="https://github.com/OCamlPro/ocplib-simplex">ocplib-simplex</a>, a generic library implementing the <a href="https://en.wikipedia.org/wiki/Simplex_algorithm">Simplex
Algorithm</a> in OCaml. It is a key component of the <a href="https://alt-ergo.ocamlpro.com">Alt-Ergo</a> automatic
theorem prover that we keep developing at OCamlPro.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#simplex">The Simplex Algorithm</a>
</li>
<li><a href="#changes">What Changed in 0.5 ?</a>
</div>
</li>
</ul>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ocplib-simplex.jpg">
<img alt="Try ocplib-simplex before implementing
your own library !" src="/blog/assets/img/ocplib-simplex.jpg"/>
</a>
<div class="caption">
Try ocplib-simplex before implementing
your own library !
</div>
</p>
</div>
</p>
<h2>
<a id="simplex" class="anchor"></a><a class="anchor-link" href="#simplex">The simplex algorithm</a>
</h2>
<p>The <a href="https://en.wikipedia.org/wiki/Simplex_algorithm">Simplex Algorithm</a> is well known among linear optimization
enthusiasts. Let's say you own a factory producing two kinds of
chairs: the first kind is cheap, you make a small profit on each but
they are quick to produce; the second kind is fancier, you make a
bigger profit on each but they take a lot of time to build. You have a
limited amount of wood and time. How many cheap and fancy chairs
should you produce to maximize your profit?</p>
<p>You can represent this problem with a set of mathematical constraints (more
precisely, linear inequalities), which is exactly the scope of the simplex
algorithm. Given a set of linear inequalities, it computes a solution maximizing
a given value (in our example, the total profit).
If you are interested in the details of the algorithm, you should definitely watch
<a href="https://www.youtube.com/watch?v=jh_kkR6m8H8">this video</a>.</p>
<p>The simplex algorithm is known to have a high worst-case
<a href="https://en.wikipedia.org/wiki/Computational_complexity">complexity</a>:
while the base algorithm is exponential-time in the worst case, it is generally very efficient in
practice.</p>
<h2>
<a id="changes" class="anchor"></a><a class="anchor-link" href="#changes">What Changed in 0.5 ?</a>
</h2>
<p>Among the main changes in this new version of <a href="https://github.com/OCamlPro/ocplib-simplex">ocplib-simplex</a>:</p>
<ul>
<li>
<p>Make the library's API more generic and easier to use (see the <a href="https://github.com/OCamlPro/ocplib-simplex/blob/v0.5/tests/standalone_minimal.ml">System Solving Example</a> or the <a href="https://github.com/OCamlPro/ocplib-simplex/blob/v0.5/tests/standalone_minimal_maximization.ml">Linear Optimization Example</a>);</p>
</li>
<li>
<p>All the modules are better documented in their <code>.mli</code> interfaces
(see
<a href="https://github.com/OCamlPro/ocplib-simplex/blob/v0.5/src/coreSig.mli">coreSig.mli</a>
for example);</p>
</li>
<li>
<p>The build system has been switched to <code>dune</code>.</p>
</li>
</ul>
<p>We hope that this simplification work will help you integrate
this library more easily into your projects!</p>
<p>If you want to follow this project, report an issue or contribute, you
can find it on <a href="https://github.com/OCamlPro/ocplib-simplex">GitHub</a>.</p>
<p>Please do not hesitate to contact us at OCamlPro:
<a href="mailto:alt-ergo@ocamlpro.com">alt-ergo@ocamlpro.com</a>.</p>
The Growth of the OCaml Distributionhttps://ocamlpro.com/blog/2023_01_02_ocaml_distribution2023-01-02T08:12:13Z2023-01-02T08:12:13Z
Fabrice Le Fessant
We recently worked on a project to build a binary installer for OCaml, inspired from RustUp for Rust. We had to build binary packages of the distribution for every OCaml version since 4.02.0, and we were surprised to discover that their (compressed) size grew from 18 MB to about 200 MB. This post gi...<p></p>
<p>We recently worked on a project to build a binary installer for OCaml,
inspired by <a href="https://rustup.rs">RustUp</a> for Rust. We had to build
binary packages of the distribution for every OCaml version since
4.02.0, and we were surprised to discover that their (compressed) size
grew from 18 MB to about 200 MB. This post gives a survey of our
findings.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<ul>
<li><a href="#introduction">Introduction</a>
</li>
<li><a href="#trends">General Trends</a>
</li>
<li><a href="#changes">Causes and Consequences</a>
</li>
<li><a href="#distribution">Inside the OCaml Installation</a>
</li>
<li><a href="#conclusion">Conclusion</a>
</div>
</li>
</ul>
<h2>
<a id="introduction" class="anchor"></a><a class="anchor-link" href="#introduction">Introduction</a>
</h2>
<p>One of the strengths of Rust is the ease with which it gets installed
on a new computer in user space: with a simple command copy-pasted
from a website into a terminal, you get everything you need to start
building Rust projects in a few seconds. <a href="https://rustup.rs">Rustup</a>,
and a set of prebuilt packages for many architectures, is the project
that makes all this possible.</p>
<p>OCaml, on the other hand, is a bit harder to install: you need to find
in the documentation the proper way to install <code>opam</code> for your operating
system, find out how to create a switch with a compiler version,
and then wait for the compiler to be built and installed. This usually
takes much more time.</p>
<p>As a winter holiday project, we worked on a project similar to Rustup,
providing binary packages for most OCaml distribution versions. It
builds upon our experience of <code>opam</code> and
<a href="https://ocamlpro.github.io/opam-bin/"><code>opam-bin</code></a>, our plugin to
build and share binary packages for <code>opam</code>.</p>
<p>While building binary packages for most versions of the OCaml
distribution, we were surprised to discover that the size of the
binary archive grew from 18 MB to about 200 MB in 10 years. Though on
many high-bandwidth connections it is not a problem, it might become
one when you go far from big towns (fortunately, we designed our
tool to be able to install from sources in such a case, trading
installation time for a smaller download).
<p>We decided it was worth trying to investigate this growth in more
detail, and this post is about our early findings.</p>
<h2>
<a id="trends" class="anchor"></a><a class="anchor-link" href="#trends">General Trends</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ocaml-binary-growth-2022.svg">
<img alt="In 10 years, the OCaml Distribution binary archive grew by a factor 10, from 18 MB to 198 MB, corresponding to a growth from 73 MB to 522 MB after installation, and from 748 to 2433 installed files." src="/blog/assets/img/ocaml-binary-growth-2022.svg"/>
</a>
<div class="caption">
In 10 years, the OCaml Distribution binary archive grew by a factor 10, from 18 MB to 198 MB, corresponding to a growth from 73 MB to 522 MB after installation, and from 748 to 2433 installed files.
</div>
</p>
</div>
</p>
<p>So, let's have a look at the evolution of the size of the binary OCaml
distribution in more detail. Between version 4.02.0 (Aug 2014) and
version 5.0.0 (Dec 2022):</p>
<ul>
<li>
<p>The size of the compressed binary archive grew from 18 MB to 198 MB</p>
</li>
<li>
<p>The size of the installed binary distribution grew from 73 MB to 522 MB</p>
</li>
<li>
<p>The number of installed files grew from 748 to 2433</p>
</li>
</ul>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ocaml-sources-growth-2022.svg">
<img alt="The OCaml Distribution source archive was much more stable, with a global growth smaller than 2." src="/blog/assets/img/ocaml-sources-growth-2022.svg"/>
</a>
<div class="caption">
The OCaml Distribution source archive was much more stable, with a global growth smaller than 2.
</div>
</p>
</div>
</p>
<p>On the other hand, the source distribution itself was much more stable:</p>
<ul>
<li>
<p>The size of the compressed source archive grew only from 3 MB to 5 MB</p>
</li>
<li>
<p>The size of the sources grew from 14 MB to 26 MB</p>
</li>
<li>
<p>The number of source files grew from 2355 to 4084</p>
</li>
</ul>
<p>For our project, this evolution makes the source distribution
a good alternative to binary distributions for low-bandwidth settings,
especially as OCaml is much faster than Rust at building itself. For
the record, version 5.0.0 takes about 1 minute to build on a 16-core
64GB-RAM computer.</p>
<p>Interestingly, if we plot the total size of the binary distribution,
and the total size restricted to files that were already present in the previous
version, we can see that the growth is mostly caused by the
increase in size of these existing files, and not by the addition of new
files:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ocaml-binary-size-2022.svg">
<img alt="The growth is
mostly caused by the increase in size of existing files, and not by
the addition of new files." src="/blog/assets/img/ocaml-binary-size-2022.svg"/>
</a>
<div class="caption">
The growth is
mostly caused by the increase in size of existing files, and not by
the addition of new files.
</div>
</p>
</div>
</p>
<h2>
<a id="changes" class="anchor"></a><a class="anchor-link" href="#changes">Causes and Consequences</a>
</h2>
<p>We tried to identify the main causes of this growth: the growth is
linear most of the time, with sharp increases (and decreases) at some
versions. We plotted the difference in size, for the total size, the
new files, the deleted files and the same files, i.e. the files that
made it from one version to the next one:</p>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ocaml-binary-size-diff-2022.svg">
<img alt="The
difference of size between two versions is not big most of the time,
but some versions exhibit huge increases or decreases." src="/blog/assets/img/ocaml-binary-size-diff-2022.svg"/>
</a>
<div class="caption">
The
difference of size between two versions is not big most of the time,
but some versions exhibit huge increases or decreases.
</div>
</p>
</div>
</p>
<p>Let's have a look at the versions with the highest increases in size:</p>
<ul>
<li>
<p>+86 MB for 4.08.0: though there are a lot of new files (+307), they
only account for 3 MB of additional storage. Most of the difference
comes from an increase in size of both the compiler libraries (probably
related to the use of Menhir for parsing) and of some binaries.
In particular:</p>
<ul>
<li>+13 MB for <code>bin/ocamlobjinfo.byte</code> (2_386_046 -> 16_907_776)
</li>
<li>+12 MB for <code>bin/ocamldep.byte</code> (2_199_409 -> 15_541_022)
</li>
<li>+6 MB for <code>bin/ocamldebug</code> (1_092_173 -> 7_671_300)
</li>
<li>+6 MB for <code>bin/ocamlprof.byte</code> (630_989 -> 7_043_717)
</li>
<li>+6 MB for <code>lib/ocaml/compiler-libs/parser.cmt</code> (2_237_513 -> 9_209_256)
</li>
</ul>
</li>
<li>
<p>+74 MB for 4.03.0: again, though there are a lot of new files (+475,
mostly in <code>compiler-libs</code>), they only account for 11 MB of
additional storage, and a large part is compensated by the removal
of <code>ocamlbuild</code> from the distribution, saving 7 MB.</p>
<p>Indeed, most of the increase in size is probably caused by compilation with
debug information (option <code>-g</code>), which considerably increases the size of
all executables, for example:</p>
<ul>
<li>+12 MB for <code>bin/ocamlopt</code> (2_016_697 -> 15_046_969)
</li>
<li>+9 MB for <code>bin/ocaml</code> (1_833_357 -> 11_574_555)
</li>
<li>+8 MB for <code>bin/ocamlc</code> (1_748_717 -> 11_070_933)
</li>
<li>+8 MB for <code>lib/ocaml/expunge</code> (1_662_786 -> 10_672_805)
</li>
<li>+7 MB for <code>lib/ocaml/compiler-libs/ocamlcommon.cma</code> (1_713_947 -> 8_948_807)
</li>
</ul>
</li>
<li>
<p>+72 MB for 4.11.0: again, the increase almost only comes from
existing files. For example:</p>
<ul>
<li>+16 MB for <code>bin/ocamldebug</code> (8_170_424 -> 26_451_049)
</li>
<li>+6 MB for <code>bin/ocamlopt.byte</code> (21_895_130 -> 28_354_131)
</li>
<li>+5 MB for <code>lib/ocaml/extract_crc</code> (659_967 -> 6_203_791)
</li>
<li>+5 MB for <code>bin/ocaml</code> (17_074_577 -> 22_388_774)
</li>
<li>+5 MB for <code>bin/ocamlobjinfo.byte</code> (17_224_939 -> 22_523_686)
</li>
</ul>
<p>Again, the increase is probably related to the addition of more debug information in
the executables (there is a specific PR on <code>ocamldebug</code> for that, and, for all
executables, more debug info is available for each allocation);</p>
</li>
<li>
<p>+48 MB for 5.0.0: a big difference in storage is not surprising for
a change in a major version, but actually half of the difference
just comes from a 23 MB increase in <code>bin/ocamldoc</code>;</p>
</li>
<li>
<p>+34 MB for 4.02.3: this one is worth noting, as it comes at a minor
version change. The increase is mostly caused by the addition of 402
new files, corresponding to <code>cmt/cmti</code> files for the <code>stdlib</code> and
<code>compiler-libs</code></p>
</li>
</ul>
<p>We could of course study some other versions, but understanding the
root causes of most of these changes would require going deeper than
we can in such a blog post. Yet, these figures give experts good hints
on which versions to start investigating.</p>
<h2>
<a id="distribution" class="anchor"></a><a class="anchor-link" href="#distribution">Inside the OCaml Installation</a>
</h2>
<p>Before concluding, it might also be worth studying which parts of the
OCaml Installation take most of the space. 5.0.0 is a good candidate
for such a study, as libraries have been moved to separate
directories, instead of all being directly stored in <code>lib/ocaml</code>.</p>
<p>Here is a decomposition of the OCaml Installation:</p>
<ul>
<li>Total: 529 MB
<ul>
<li><code>share</code>: 1 MB
</li>
<li><code>man</code>: 4 MB
</li>
<li><code>bin</code>: 303 MB
</li>
<li><code>lib/ocaml</code>: 223 MB
<ul>
<li><code>compiler-libs</code>: 134 MB
</li>
<li><code>expunge</code>: 20 MB
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>As we can see, a large majority of the space is used by
executables. For example, all of the following are above 10 MB:</p>
<ul>
<li>28 MB <code>ocamldoc</code>
</li>
<li>26 MB <code>ocamlopt.byte</code>
</li>
<li>25 MB <code>ocamldebug</code>
</li>
<li>21 MB <code>ocamlobjinfo.byte</code>, <code>ocaml</code>
</li>
<li>20 MB <code>ocamldep.byte</code>, <code>ocamlc.byte</code>
</li>
<li>19 MB <code>ocamldoc.opt</code>
</li>
<li>18 MB <code>ocamlopt.opt</code>
</li>
<li>15 MB <code>ocamlobjinfo.opt</code>
</li>
<li>14 MB <code>ocamldep.opt</code>, <code>ocamlc.opt</code>, <code>ocamlcmt</code>
</li>
</ul>
<p>There are both bytecode and native code executables in this list.</p>
<h2>
<a id="conclusion" class="anchor"></a><a class="anchor-link" href="#conclusion">Conclusion</a>
</h2>
<p>Our installer project would benefit from having a smaller binary OCaml
distribution, but most OCaml users in general would also benefit from
that: after a few years of using OCaml, OCaml developers usually end
up with huge <code>$HOME/.opam</code> directories, because every <code>opam</code> switch
often takes more than 1 GB of space, and the OCaml distribution takes
a big part of that. <code>opam-bin</code> partially solves this problem by
sharing equal files between several switches (when the
<code>--enable-share</code> configuration option has been used).</p>
<p>Here is a short list of ideas to test to decrease the size of the
binary OCaml distribution:</p>
<ul>
<li>
<p>Use the same executable for multiple programs (<code>ocamlc.opt</code>,
<code>ocamlopt.opt</code>, <code>ocamldep.opt</code>, etc.), using the first command
argument to choose which behavior to adopt (see the sketch after this
list). Rustup, for example, only
installs one binary in <code>$HOME/.cargo/bin</code> for <code>cargo</code>, <code>rustc</code>,
<code>rustup</code>, etc., and our tool actually does the same trick to share
the same binary for itself, <code>opam</code>, <code>opam-bin</code>, <code>ocp-indent</code> and
<code>drom</code>.</p>
</li>
<li>
<p>Split installed files into separate <code>opam</code> packages, of which only
one would be installed as the compiler distribution. For example,
most <code>cmt</code> files of <code>compiler-libs</code> are not needed by most users;
they might only be useful for compiler/tooling developers, and even
then, only in very rare cases. They could be installed as another
<code>opam</code> package.</p>
</li>
<li>
<p>Remove the <code>-linkall</code> flag on <code>ocamlcommon.cm[x]a</code> libraries. In
general, such a flag should only be set when building an executable
that is expected to use plugins, because otherwise, this executable
will contain all the modules of the library, even the ones that are
not useful for its specific purpose.</p>
</li>
</ul>
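<p>As an illustration of the first idea above, here is a minimal OCaml sketch of a multi-call binary (the tool names are purely hypothetical): the executable dispatches on the name it was invoked under, so several command names can simply be hard links or symbolic links to a single installed binary.</p>
<pre><code class="language-ocaml">(* Dispatch on the name under which the binary was invoked
   (Sys.argv.(0)), busybox-style. Tool names are illustrative only. *)
let () =
  match Filename.basename Sys.argv.(0) with
  | "mytool-compile" -> print_endline "behaving as the compiler"
  | "mytool-dep"     -> print_endline "behaving as the dependency analyser"
  | name ->
      Printf.eprintf "%s: unknown tool name\n" name;
      exit 2
</code></pre>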
WebAssembly/Wasm and OCamlhttps://ocamlpro.com/blog/2022_12_14_wasm_and_ocaml2022-12-14T08:12:13Z2022-12-14T08:12:13Z
Léo Andrès
Pierre Chambart
In this first post about WebAssembly (Wasm) and OCaml, we introduce the work we have been doing for quite some time now, though without publicity, about our participation in the Garbage-Collection (GC) Working Group for Wasm, and two related development projects in OCaml. WebAssembly, a fast and por...<p></p>
<div class="figure">
<p>
<img alt="" src="/blog/assets/img/dalle_dragon_camel.png"/>
<div class="caption">
The Dragon-Camel is raging at the sight of all the challenges we overcome!
</div>
</p>
</div>
<p>In this first post about <a href="https://webassembly.org/">WebAssembly</a> (Wasm) and OCaml, we introduce
the work we have been doing for quite some time now, though without
publicity, about our participation in the Garbage-Collection (GC)
Working Group for Wasm, and two related development projects in OCaml.</p>
<h2>WebAssembly, a fast and portable bytecode</h2>
<blockquote>
<p>WebAssembly is a low-level, binary format that allows compiled code
to run efficiently in the browser. Its roadmap is decided by Working
Groups from multiple organizations and companies, including
Microsoft, Google, and Mozilla. These groups meet regularly to
discuss and plan the development of WebAssembly, engaging with the broader
community of developers, academics, and other interested parties to
gather feedback and ideas for the future of WebAssembly.</p>
</blockquote>
<p>There are multiple projects in OCaml related to Wasm, notably
<a href="https://github.com/remixlabs/wasicaml">Wasicaml</a>, a production-ready port of the OCaml bytecode interpreter
to Wasm. However, these projects don't tackle the domain we would
like to address, and for good reasons: they target the <strong>existing</strong>
version of Wasm, which is basically a very simple programming language
with no data structures, but with access to a large memory
array. Almost anything can of course be compiled to something like
that, but there is a big restriction: the resulting program can
interact with the outside world only through the aforementioned memory buffer.
This is perfectly fine if you write Command-Line Interface (CLI) tools,
or workers to be deployed in a Content Delivery Network (CDN). However,
this kind of interaction can become quite tedious if you need to deal
with abstract objects provided by your environment, for example DOM
objects in a browser to manipulate webpages. In such cases, you will
need to write some wrapper access functions in JavaScript (or OCaml
with <code>js_of_ocaml</code> of course), and you will have to be very careful
about the lifetime of those objects to avoid memory leaks.</p>
<p>Hence the shiny new proposals to extend Wasm with various useful
features that can be very convenient for OCaml. In particular, three
extensions crucially matter to us, functional programmers: the
<a href="https://github.com/WebAssembly/gc/blob/main/proposals/gc/MVP.md">Garbage Collection</a>, <a href="https://github.com/WebAssembly/exception-handling/blob/main/proposals/exception-handling/Exceptions.md">Exceptions</a> and <a href="https://github.com/WebAssembly/tail-call/blob/main/proposals/tail-call/Overview.md">Tail-Call</a> proposals.</p>
<h2>Our involvement in the GC-related Working Group</h2>
<p>The Wasm committee has already worked on these proposals for a few
years, and the Exceptions and Tail-Call proposals are now quite
satisfying. However, this is not yet the case for the GC proposal. Indeed,
finding a good API for a GC that is compatible with all the languages
in the wild, that can be implemented efficiently, and can be used to
run a program you don't trust, is anything but an easy task.
Multiple attempts by strong teams, for different virtual machines, have
exposed limitations of past proposals. But we must now admit that the
current proposal has reached a state where it is quite impressive,
being both simple <strong>and</strong> generic.</p>
<p>The proposal is now getting close to a feature freeze status. Thanks
to the hard work of many people on the committee, including us, the
particularities of functional typed languages were not forgotten in
the design, and we are convinced that there should be no problem for
OCaml. Now is the time to test it for real!</p>
<h2>Targetting Wasm from the OCaml Compiler</h2>
<p>Adding a brand new backend to a compiler to target something that is
quite different from your usual assembly can be a huge amount of work, and only
a few language developers actively work on making a prototype for
Wasm+GC. Yet, we think that it is important for the committee to have
as many examples as possible to validate the proposal and move it to
the next step.</p>
<p>That's the reason why we decided to contribute to the proposal, by
prototyping a backend for Wasm to the OCaml compiler.</p>
<h2>Our experimental Wasm interpreter in OCaml</h2>
<p>In parallel, we are also working on the development of our own Wasm
Virtual Machine in OCaml, to be able to easily experiment both on the
OCaml side and the Wasm side, while waiting for most official Wasm VMs to
fully implement the new proposals.</p>
<p>These experimental projects and related discussions are very important
design steps, although obviously far from production-ready status.</p>
<p>As our current work focuses on OCaml 4.14, effect handlers are left for
future work. The current <a href="https://github.com/WebAssembly/stack-switching/blob/main/proposals/stack-switching/Overview.md">proposal</a> that would make it possible to
compile effect handlers to Wasm nicely is still in its early stages.
We hope to be able to prototype it too on our Wasm VM.</p>
<p>Note that we are looking for sponsors to fund this work. If supporting
Wasm in OCaml may impact your business, you can contact us to discuss
how we can use your help!</p>
<p>Our next blog post in January will provide more technical details on
our two prototyping efforts.</p>
Alt-Ergo: the SMT solver with model generationhttps://ocamlpro.com/blog/2022_11_16_alt-ergo-models2022-11-16T08:12:13Z2022-11-16T08:12:13Z
Steven de Oliveira
Pierre Villemot
Hichem Rami Ait El Hara
Guillaume Bury
The Alt-Ergo automatic theorem prover developed at OCamlPro has just been released with a major update : counterexample model can now be generated. This is now available on the next branch, and will officially be part of the 2.5.0 release, coming this year ! Alt-Ergo at a Glance Alt-Ergo is an open ...<p>The Alt-Ergo automatic theorem prover developed at OCamlPro has just been released with a major update: counterexample models can now be generated. This is now available on the next branch, and will officially be part of the 2.5.0 release, coming this year!</p>
<h3>Alt-Ergo at a Glance</h3>
<p><a href="https://alt-ergo.ocamlpro.com">Alt-Ergo</a> is an open source automatic theorem prover based on the <a href="https://en.wikipedia.org/wiki/Satisfiability_Modulo_Theories">SMT</a> technology. It was born at the <a href="https://www.lri.fr">Laboratoire de Recherche en Informatique</a>, <a href="https://www.inria.fr/centre/saclay">Inria Saclay Ile-de-France</a> and <a href="https://www.cnrs.fr/index.php">CNRS</a> in 2006 and has been maintained and developed by OCamlPro since 2013.</p>
<p></p>
<p>It is capable of reasoning in a combination of several built-in theories such as:</p>
<ul>
<li>uninterpreted equality;
</li>
<li>integer and rational arithmetic;
</li>
<li>arrays;
</li>
<li>records;
</li>
<li>algebraic data types;
</li>
<li>bit vectors.
</li>
</ul>
<p>It is also able to deal with commutative and associative operators and quantified formulas, and has a polymorphic first-order native input language.
Alt-Ergo is written in <a href="https://caml.inria.fr/ocaml/index.fr.html">OCaml</a>. Its core has been formally proved in the <a href="https://coq.inria.fr">Coq proof assistant</a>.</p>
<p>Alt-Ergo has been involved in a qualification process (DO-178C) by <a href="http://www.airbus.com">Airbus Industrie</a>. During this process, a qualification kit has been produced. It was composed of a technical document with tool requirements (TR) that gives a precise description of each part of the prover, a companion document (~ 450 pages) of tests, and an instrumented version of the tool with a TR trace mechanism.</p>
<h3>Model Generation</h3>
<p>When a property is false, generating a counterexample is a key feature that many state-of-the-art SMT solvers include by default. However, this is a complex problem in the first place.</p>
<p>The first obstacle is the decidability of the theories manipulated by SMT solvers. In general, the complexity class (i.e. the classification of algorithmic problems) ranges between "NP-hard" (for the linear arithmetic theory on integers, for example) and "Undecidable" (for polynomial arithmetic on integers, for example). Then come quantified properties, i.e. properties prefixed with <code>forall</code>s and <code>exists</code>, adding an additional layer of complexity and undecidability. Another challenge was the core algorithm behind Alt-Ergo, which does not natively support model generation. Finally, an implementation of models has to take care of Alt-Ergo's support for polymorphism.</p>
<h3>How to use Model Generation in Alt-Ergo</h3>
<p>There are two ways to activate model generation on Alt-Ergo.</p>
<ul>
<li>Basic usage: simply add the option <code>--model</code> to your command (<code>$ alt-ergo file --model</code>)
</li>
<li>Advanced usage: three options mainly impact the model generation.
<ul>
<li>
<p><code>--interpretation</code>: sets the model generation strategy. It can either be
<code>none</code> for no model generation; <code>first</code> for generating only the very first
interpretation computed; <code>every</code> for generating a
model after each decision; or <code>last</code> for generating a model only when <code>alt-ergo</code>
concludes on the formula's satisfiability.</p>
</li>
<li>
<p><code>--sat-solver</code>: only the 'Tableaux-CDCL' sat solver is compatible with the
interpretation feature</p>
</li>
<li>
<p><code>--instantiation-heuristic</code>: when set to <code>normal</code>, <code>alt-ergo</code> generates models
faster. This is an experimental feature that sometimes generates incorrect
models.</p>
<p>Example:</p>
<p><code>$ alt-ergo file --interpretation every --sat-solver Tableaux-CDCL --instantiation-heuristic auto</code></p>
</li>
</ul>
</li>
</ul>
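<p>As a quick illustration, here is a small hypothetical input file written in Alt-Ergo's native language; the goal is not valid, so running <code>$ alt-ergo file --model</code> on it should produce a counterexample model giving values for <code>x</code> and <code>y</code> (the exact output format may vary between versions):</p>
<pre><code>logic x, y : int
axiom x_positive : x >= 0
goal g : x + y > 0
</code></pre>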
<p><em>Warning:</em> only model generation for linear arithmetic and enums has been
tested. Other theories are either not implemented (ADTs) or experimental (risk
of crash or unsound models). We are currently still heavily testing the
feature, so feel free to join us on
<a href="github.com/OcamlPro/alt-ergo">Alt-Ergo's GitHub repository</a> if you have
questions or issues with this new feature.
Note that the models generated are best-effort models; Alt-Ergo
does not answer <code>Sat</code> when it outputs a model. In a future version, we will add
a mechanism that automatically checks the model generated.</p>
<p>Godspeed!</p>
<h3>Acknowledgements</h3>
<p>We want to thank David Mentré and Denis Cousineau at <a href="https://www.mitsubishielectric-rce.eu/merce-in-france/">Mitsubishi Electric R&D Center Europe</a>
for funding the initial work on counterexample generation.</p>
<p>Note that MERCE has been a Member of the Alt-Ergo Users’ Club for 3 years.
This partnership allowed Alt-Ergo to evolve and we hope that more users
will join the Club on our journey to make Alt-Ergo a must-have tool.
Please do not hesitate to contact the Alt-Ergo team at OCamlPro:
<a href="mailto:alt-ergo@ocamlpro.com">alt-ergo@ocamlpro.com</a>.</p>
Let's Encrypt Wildcard Certificates Made Easy with Agnoshttps://ocamlpro.com/blog/2022_10_05_agnos_0.1.0-beta2022-10-05T08:12:13Z2022-10-05T08:12:13Z
Arthur Carcano
OCamlPro
It is with great pleasure that we announce the first beta release of Agnos. A former personal project of our new recruit, Arthur, Agnos development is now hosted at and sponsored by OCamlPro's Rust division, Red Iron. A white lamb with a blue padlock and blue stars. He is clearly to be trusted with ...<p></p>
<p>It is with great pleasure that we announce the first beta release of <a href="https://github.com/krtab/agnos">Agnos</a>. A former personal project of our new recruit, Arthur, Agnos development is now hosted at and sponsored by OCamlPro's Rust division, <a href="https://red-iron.eu/">Red Iron</a>.</p>
<p><img src="/blog/assets/img/agnos-banner.png" alt="A white lamb with a blue padlock and blue stars. He is clearly to be trusted with your certificate needs. A text reads: Agnos, wildcard Let's Encrypt certificates, no DNS-provider API required." /></p>
<blockquote>
<p><strong>TL;DR:</strong>
If you are familiar with ACME providers like Let's Encrypt, DNS-01 and the challenges relating to wildcard certificates, simply know that Agnos touts itself as a single-binary, API-less, provider-agnostic dns-01 client, allowing you to easily obtain wildcard certificates without having to interface with your DNS provider. To do so, it offers a user-friendly configuration and answers Let's Encrypt DNS-01 challenges on its own, bypassing the need for API calls to edit DNS zones. You may want to jump to the last <a href="#agnos-as-the-best-of-both-worlds">section</a> of this post, or directly join us on Agnos's <a href="https://github.com/krtab/agnos">github</a>.</p>
</blockquote>
<p>Agnos was born from the observation that even though wildcard certificates are in many cases more convenient and useful than their fully qualified counterparts, they are not often used in practice. As of today, it is not uncommon to see certificates with multiple <a href="https://en.wikipedia.org/wiki/Subject_Alternative_Name">Subject Alternate Names</a> (SAN) for multiple subdomains, which can become <a href="https://discuss.httparchive.org/t/san-certificates-how-many-alt-names-are-too-many/1867">problematic</a> and weaken infrastructure. While some situations do require foregoing wildcard certificates, this choice is too often still a default one.</p>
<p>At OCamlPro, we believe that technical difficulties should not stand in the way of optimal decision making, and that compromises should only be made in the face of unsolvable challenges. By releasing this first beta of Agnos, we hope that your feedback will help us build a tool truly useful to the community and that together, we can open a path towards seamless wildcard certificate issuance, leaving previously encountered issues and pain-points behind as things of the past.</p>
<p>This blog post describes the different ACME challenges, why DNS provider APIs have so far been hindering DNS-01 adoption, and how Agnos solves this issue. If you are already curious and want to run some code, let's meet on Agnos's <a href="https://github.com/krtab/agnos">github</a>.</p>
<h2>Let's encrypt's mechanism and ACME challenges</h2>
<p>The Automatic Certificate Management Environment (ACME) is the protocol behind automated certificate authority services like Let's Encrypt. At its core, this protocol requires the client asking for a certificate to provide evidence that they control a resource by having said resource display some authority-determined token.</p>
<p>The easiest way to do so is to serve a file on a web-server. For example, serving a file containing the token at <code>my-domain.example</code> would prove that I control the web-server that the <strong>fully qualified domain name</strong> <code>my-domain.example</code> is pointing to. This, under normal circumstances, proves that I somewhat control this fully qualified domain. This process is illustrated below.</p>
<p>The ACME client initiates the certificate issuance process and is challenged to serve the token via HTTP at the domain address. The ACME client and HTTP server can be and often are on the same machine. The token can be quickly provisioned, and the ACME client can ask the ACME server to validate the challenge and issue the certificate.</p>
<p><img src="/blog/assets/img/http-01-schema.png" alt="Schematic illustration of the HTTP-01 challenge." /></p>
<p>However, demonstrating that one controls an HTTP server pointed to by <code>my-domain.example</code> is not deemed enough by Let's Encrypt to demonstrate <strong>full</strong> control of the <code>my-domain.example</code> domain and all its subdomains. Hence, the user cannot be issued a wildcard certificate through this method.</p>
<p>To obtain a wildcard certificate, one must rely on the DNS-01 type of challenge, illustrated below. The ACME client initiates the certificate issuance process and is challenged to serve the token via a DNS TXT record. Because DNS management is often delegated to a DNS provider, the DNS server is rarely on the same machine, and the token must be provisioned via a call to the DNS provider API, if there is any. Moreover, DNS providers virtually always use multiple servers, and the new record must be propagated to all of them. The ACME client must then wait and check for the propagation to be finished before asking the ACME server to validate the challenge and issue the certificate.</p>
<p><img src="/blog/assets/img/dns-01-schema.png" alt="Schematic illustration of the DNS-01 challenge." /></p>
<p>The pros and cons of each of these two challenge types are summarized by Let's Encrypt's <a href="https://letsencrypt.org/docs/challenge-types/">documentation</a> as follows:</p>
<blockquote>
<h4>HTTP-01</h4>
<h5>Pros</h5>
<ul>
<li>It’s easy to automate without extra knowledge about a domain’s configuration.
</li>
<li>It allows hosting providers to issue certificates for domains CNAMEd to them.
</li>
<li>It works with off-the-shelf web servers.
</li>
</ul>
<h5>Cons</h5>
<ul>
<li>It doesn’t work if your ISP blocks port 80 (this is rare, but some residential ISPs do this).
</li>
<li>Let’s Encrypt doesn’t let you use this challenge to issue wildcard certificates.
</li>
<li>If you have multiple web servers, you have to make sure the file is available on all of them.
</li>
</ul>
<h4>DNS-01</h4>
<h5>Pros</h5>
<ul>
<li>You can use this challenge to issue certificates containing wildcard domain names.
</li>
<li>It works well even if you have multiple web servers.
</li>
</ul>
<h5>Cons</h5>
<ul>
<li>Keeping API credentials on your web server is risky.
</li>
<li>Your DNS provider might not offer an API.
</li>
<li>Your DNS API may not provide information on propagation times.
</li>
</ul>
</blockquote>
<h2>Agnos as the best of both worlds</h2>
<p>By using NS records to delegate the DNS-01 challenge to Agnos itself, we can virtually remove all of DNS-01's cons. Indeed, by serving its own DNS answers, Agnos:</p>
<ul>
<li>Nullifies the need for API and API credentials
</li>
<li>Nullifies all concerns regarding propagation times
</li>
</ul>
<p>In more detail, Agnos proceeds as follows (and as illustrated below). Before any ACME transaction takes place (and only once), the ACME client user manually updates their DNS zone to delegate ACME-specific subdomains to Agnos. Note that the rest of DNS functionality is still assumed by the DNS provider. To carry out the ACME transaction, the ACME client initiates the certificate issuance process and is challenged to serve the token via a DNS TXT record. Agnos does so using its own DNS functionality (leveraging <a href="https://trust-dns.org/">Trust-dns</a>). The ACME client can immediately ask the ACME server for validation. The ACME server asks the DNS provider for the TXT record and is told that the ACME-specific subdomain is delegated to Agnos. The ACME server then asks Agnos-as-a-DNS-server for the TXT record, which Agnos provides. Finally, the certificate is issued and stored by Agnos on the client machine.</p>
<p><img src="/blog/assets/img/dns-01-agnos-schema.png" alt="Schematic illustration of the DNS-01 challenge when using Agnos." /></p>
<h2>Taking Agnos for a ride</h2>
<p>In conclusion, we hope that by switching to Agnos, or more generally to provider-agnostic DNS-01 challenge solving, individuals and organizations will benefit from the full power of DNS-01 and wildcard certificates, without having to take API-related concerns into account when choosing their DNS provider.</p>
<p>If this post has piqued your interest and you want to help us develop Agnos further by trying the beta out, let's meet on our <a href="https://github.com/krtab/agnos">github</a>. We would very much appreciate any feedback and bug reports, so we tried our best to streamline and document the installation process well, to make it easy for new users.
On ArchLinux, for example, getting started can be as easy as:</p>
<p>Adding two records to your DNS zone using your provider web GUI:</p>
<pre><code>agnos-ns.doma.in A 1.2.3.4
_acme-challenge.doma.in NS agnos-ns.doma.in
</code></pre>
<p>and running on your server</p>
<pre><code class="language-bash"># Install the agnos binary
yay -S agnos
# Allow agnos to bind the privileged port 53
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/agnos
# Download the example configuration file
curl 'https://raw.githubusercontent.com/krtab/agnos/v.0.1.0-beta.1/config_example.toml' > agnos_config.toml
# Edit it to suit your need
vim agnos_config.toml
# Launch agnos 🚀
agnos agnos_config.toml
</code></pre>
<p>Until then, happy hacking!</p>
opam 2.1.3 is released!https://ocamlpro.com/blog/2022_08_12_opam_2.1.3_release2022-08-12T08:12:13Z2022-08-12T08:12:13Z
Raja Boujbel
OCamlPro
Feedback on this post is welcomed on Discuss! We are pleased to announce the minor release of opam 2.1.3. This opam release consists of backported fixes: Fix opam init and opam init --reinit when the jobs variable has been set in the opamrc or the current config. (#5056)
opam var no longer fails if ...<p><em>Feedback on this post is welcomed on <a href="https://discuss.ocaml.org/t/ann-opam-2-1-3/10299">Discuss</a>!</em></p>
<p>We are pleased to announce the minor release of <a href="https://github.com/ocaml/opam/releases/tag/2.1.3">opam 2.1.3</a>.</p>
<p>This opam release consists of <a href="https://github.com/ocaml/opam/issues/5000">backported</a> fixes:</p>
<ul>
<li>Fix <code>opam init</code> and <code>opam init --reinit</code> when the <code>jobs</code> variable has been set in the opamrc or the current config. (<a href="https://github.com/ocaml/opam/issues/5056">#5056</a>)
</li>
<li><code>opam var</code> no longer fails if no switch is set (<a href="https://github.com/ocaml/opam/issues/5025">#5025</a>)
</li>
<li>Setting a variable with option <code>--switch <sw></code> fails instead of writing an invalid <code>switch-config</code> file (<a href="https://github.com/ocaml/opam/issues/5027">#5027</a>)
</li>
<li>Handle external dependencies when updating switch state pin status (all pins), instead of as a post-pin action (only when called with <code>opam pin</code>) (<a href="https://github.com/ocaml/opam/issues/5046">#5046</a>)
</li>
<li>Remove Windows double printing on commands and their output (<a href="https://github.com/ocaml/opam/issues/4940">#4940</a>)
</li>
<li>Stop Zypper from upgrading packages on updates on OpenSUSE (<a href="https://github.com/ocaml/opam/issues/4978">#4978</a>)
</li>
<li>Clearer error message if a command doesn't exist (<a href="https://github.com/ocaml/opam/issues/4112">#4112</a>)
</li>
<li>Actually allow multiple state caches to co-exist (<a href="https://github.com/ocaml/opam/issues/4554">#4554</a>)
</li>
<li>Fix some empty conflict explanations (<a href="https://github.com/ocaml/opam/issues/4373">#4373</a>)
</li>
<li>Fix an internal error on admin repository upgrade from OPAM 1.2 (<a href="https://github.com/ocaml/opam/issues/4965">#4965</a>)
</li>
</ul>
<p>and improvements:</p>
<ul>
<li>When inferring a 2.1+ switch invariant from 2.0 base packages, don't filter out pinned packages as that causes very wide invariants for pinned compiler packages (<a href="https://github.com/ocaml/opam/issues/4501">#4501</a>)
</li>
<li>Some optimisations to <code>opam list --installable</code> queries combined with other filters (<a href="https://github.com/ocaml/opam/issues/4311">#4311</a>)
</li>
<li>Improve performance of some opam list combinations (e.g. <code>--available</code>, <code>--installable</code>) (<a href="https://github.com/ocaml/opam/issues/4999">#4999</a>)
</li>
<li>Improve performance of <code>opam list --conflicts-with</code> when combined with other filters (<a href="https://github.com/ocaml/opam/issues/4999">#4999</a>)
</li>
<li>Improve performance of <code>opam show</code> by as much as 300% when the package to show is given explicitly or is unique (<a href="https://github.com/ocaml/opam/issues/4997">#4997</a>)(<a href="https://github.com/ocaml/opam/issues/4172">#4172</a>)
</li>
<li>When a field is defined in switch and global scope, try to determine the scope also by checking switch selection (<a href="https://github.com/ocaml/opam/issues/5027">#5027</a>)
</li>
</ul>
<p>You can also find API changes in the <a href="https://github.com/ocaml/opam/releases/tag/2.1.3">release note</a>.</p>
<hr />
<p>Opam installation instructions (unchanged):</p>
<ol>
<li>
<p>From binaries: run</p>
<pre><code class="language-shell-session">$ bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.1.3"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.1.3">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
</li>
<li>
<p>From source, using opam:</p>
<pre><code class="language-shell-session">$ opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
</li>
<li>
<p>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.1.3#compiling-this-repo">README</a>.</p>
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
OCamlPro at the JFLA2022 Conferencehttps://ocamlpro.com/blog/2022_07_12_ocamlpro_at_the_jfla20222022-07-12T08:12:13Z2022-07-12T08:12:13Z
OCamlPro
Dario Pinto
In today's article, we share our contributions to the 2022 JFLAs, the French-Speaking annual gathering on Application Programming Languages, mainly Functional Languages such as OCaml (Journées Francophones des Langages Applicatifs). This much awaited event is organised by Inria, the French National...<p></p>
<div class="figure">
<p>
<img alt="" src="/blog/assets/img/picture_jfla2022_domaine_essendieras.jpg"/>
<div class="caption">
<a href="https://www.essendieras.fr/" target="_blank">
Domaine d'Essendiéras
</a>, located in French Region <em>Perigord</em>, where the JFLA2022 took place!
</div>
</p>
</div>
<p>In today's article, we share our contributions to the 2022 <a href="http://jfla.inria.fr/">JFLA</a>s, the French-Speaking annual gathering on Application Programming Languages, mainly Functional Languages such as OCaml (<em>Journées Francophones des Langages Applicatifs</em>).</p>
<p>This much awaited event is organised by <a href="https://www.inria.fr/fr">Inria</a>, the French National Institute for Research in Science and Digital Technologies.</p>
<p>This is always the best occasion for us to directly share our latest work and contributions with this diverse community of researchers, professors, students and industrial actors alike. Moreover, it allows us to meet up with all our long-time peers and get in contact with an ever-changing pool of actors in the fields of functional languages in general, formal methods and everything OCaml!</p>
<p>This year, the three papers we submitted were all selected, and this is what this article is about!</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<p><a href="#mikino">Mikino, formal verification made accessible</a></p>
<p><a href="#SWH">Connecting Software Heritage with the OCaml ecosystem</a></p>
<p><a href="#alt-ergo">Alt-Ergo-Fuzz, hunting the bugs of the bug hunter</a>
</div></p>
<h2>
<a id="mikino" class="anchor"></a><a class="anchor-link" href="#mikino">Mikino, formal verification made accessible</a>
</h2>
<p><em>Mikino and all related content mentioned in this article were made by Adrien Champion</em></p>
<p>If you follow our Blog, you may have already read our <a href="https://ocamlpro.com/blog/2021_10_14_verification_for_dummies_smt_and_induction">Mikino blogpost</a>, but if you haven't, here's a quick breakdown and a few pointers... in case you wish to play around or maybe contribute to the project. ;)</p>
<p>So what is Mikino?</p>
<blockquote>
<p>Mikino is a simple induction engine over transition systems. It is written in Rust, with a strong focus on ergonomics and user-friendliness.</p>
</blockquote>
<p>Depending on what your needs are, you could either be interested in the <a href="https://crates.io/crates/mikino_api">Mikino Api</a> or the <a href="https://crates.io/crates/mikino">Mikino Binary</a> or just, for purely theoretical reasons, want to undergo our <a href="https://ocamlpro.github.io/verification_for_dummies/">Verification for Dummies: SMT and Induction</a> tutorial which is specifically tailored to appeal to the newbies of formal verification!</p>
<p>Have a go at it, learn and have fun!</p>
<p>For further reading: <a href="https://hal.inria.fr/hal-03626850/">OCamlPro's paper for the JFLA2022 (Mikino) </a> (French-written article describing the entire project).</p>
<h2>
<a id="SWH" class="anchor"></a><a class="anchor-link" href="#SWH">Connecting Software Heritage with the OCaml ecosystem</a>
</h2>
<p><em>The archiving of OCaml packages into the SWH architecture, the release of the <a href="https://github.com/OCamlPro/swhid/">swhid</a> library and the integration of SWH into opam were done by Léo Andrès, Raja Boujbel, Louis Gesbert and Dario Pinto</em></p>
<p>Once again, if you follow our Blog, you must have seen <a href="https://www.softwareheritage.org/?lang=fr">Software Heritage</a> (SWH) mentioned in our <a href="https://ocamlpro.com/blog/2022_01_31_2021_at_ocamlpro#free_software">yearly review article</a>.</p>
<p>Now you can also look at <a href="https://hal.archives-ouvertes.fr/hal-03626845/">SWH paper by OCamlPro for the JFLA2022 (French)</a> if you are looking for a detailed explanation of how important Software Heritage is to free software as a whole, and in what manner OCamlPro contributed to this gargantuan long-term endeavour of theirs.</p>
<p>This great collaboration was one of the highlights of last year, from which arose an OCaml library called <a href="https://github.com/OCamlPro/swhid/">swhid</a> and the guaranteed long-term preservation of all the packages found on opam.</p>
<p>The work we did to achieve this was to:</p>
<ul>
<li>add a few modules to the SWH architecture in order to store all the OCaml packages found on opam in the Library of Alexandria of open source software.
</li>
<li>release a library used for computing SWH identifiers
</li>
<li>add support in opam in order to allow a fallback on SWH architecture if a given package is missing from the <a href="https://github.com/ocaml/opam-repository">opam repository</a>
</li>
<li>patch the opam repository in order to detect already missing packages
</li>
</ul>
<h2>
<a id="alt-ergo" class="anchor"></a><a class="anchor-link" href="#alt-ergo">Alt-Ergo-Fuzz, hunting the bugs of the bug hunter</a>
</h2>
<p><em>The fuzzing of the SMT-Solver Alt-Ergo was done by Hichem Rami Ait El Hara, Guillaume Bury and Steven de Oliveira</em></p>
<p>As the last entry of OCamlPro's papers that have made it to this year's JFLA: a rundown of Hichem's work, guided by Guillaume and Steven, on developing a fuzzer for <a href="https://github.com/OCamlPro/alt-ergo">Alt-Ergo</a>.</p>
<p>When it comes to critical systems, and industry-borne software, there are no limits to the requirements in safety, correctness, testing that would prove a program's reliability.</p>
<p>This is what SMT (Satisfiability Modulo Theory)-Solvers like Alt-Ergo are for: they use a complex mix of theory and implementation in order to prove, given a set of input theories, whether a program is acceptable... But SMT-Solvers, like any other program in the world, have to be searched for bugs or unwanted behaviours - this is the harsh reality of development.</p>
<p>With that in mind, Hichem sought to provide a fuzzer for Alt-Ergo to help <em>hunt the bugs of the bug hunter</em>: <a href="https://github.com/hra687261/alt-ergo-fuzz">Alt-Ergo-Fuzz</a>.</p>
<p>This tool has helped identify several bugs of unsoundness and crashes:</p>
<ul>
<li><a href="https://github.com/OCamlPro/alt-ergo/issues/474">#474</a> - Crash
</li>
<li><a href="https://github.com/OCamlPro/alt-ergo/issues/475">#475</a> - Crash
</li>
<li><a href="https://github.com/OCamlPro/alt-ergo/issues/476">#476</a> - Unsoundness
</li>
<li><a href="https://github.com/OCamlPro/alt-ergo/issues/477">#477</a> - Unsoundness
</li>
<li><a href="https://github.com/OCamlPro/alt-ergo/issues/479">#479</a> - Unsoundness
</li>
<li><a href="https://github.com/OCamlPro/alt-ergo/issues/481">#481</a> - Crash
</li>
<li><a href="https://github.com/OCamlPro/alt-ergo/issues/482">#482</a> - Crash
</li>
</ul>
<p>More details in <a href="https://hal.inria.fr/hal-03626861/">OCamlPro's paper for the JFLA2022 (Alt-Ergo-Fuzz)</a>.</p>
2021 at OCamlProhttps://ocamlpro.com/blog/2022_01_31_2021_at_ocamlpro2022-01-31T08:12:13Z2022-01-31T08:12:13Z
Muriel
OCamlPro
OCamlPro was created in 2011 to advocate the adoption of the OCaml language and Formal Methods in general in the industry. 2021 was a very special year as we celebrated our 10th anniversary! While building a team of highly-skilled engineers, we navigated through our expertise domains, programming la...<p>
<div class="figure">
<p>
<a href="/blog/assets/img/2021_ocamlpro.jpeg">
<img alt="Passing from one year to another is a great time to have a look back!" src="/blog/assets/img/2021_ocamlpro.jpeg"/>
</a>
<div class="caption">
Passing from one year to another is a great time to have a look back!
</div>
</p>
</div>
</p>
<p>OCamlPro was created in 2011 to advocate the adoption of the OCaml language and Formal Methods in general in the industry. 2021 was a very special year as we celebrated our 10th anniversary! While building a team of highly-skilled engineers, we navigated through our expertise domains, programming languages design, compilation and analysis, advanced developer tooling, formal methods, blockchains and high-value software prototyping.</p>
<p>In this article, as every year (see <a href="/blog/2021_02_02_2020_at_ocamlpro">last year's post</a>), we review some of the work we did during 2021, in many different worlds.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<p><a href="#people">Newcomers at OCamlPro</a></p>
<p><a href="#apps">Real Life Modern Applications</a></p>
<ul>
<li><a href="#mlang">Modernizing the French Income Tax System</a>
</li>
<li><a href="#cobol">A First Step in the COBOL Universe</a>
</li>
<li><a href="#geneweb">Auditing a High-Scale Genealogy Application</a>
</li>
<li><a href="#mosaic">Improving an ecotoxicology platform</a>
</li>
</ul>
<p><a href="#ocaml">Contributions to OCaml</a></p>
<ul>
<li><a href="#flambda">Flambda Code Optimizer</a>
</li>
<li><a href="#opam">Opam Package Manager</a>
</li>
<li><a href="#community">LearnOCaml and TryOCaml</a>
</li>
<li><a href="#tooling">OCaml Documentation Hub</a>
</li>
<li><a href="#free_software">Plugging Opam into Software Heritage</a>
</li>
</ul>
<p><a href="#formal-methods">Tooling for Formal Methods</a></p>
<ul>
<li><a href="#alt-ergo">Alt-Ergo Development</a>
</li>
<li><a href="#club">Alt-Ergo Users’ Club and R&D Projects</a>
</li>
<li><a href="#dolmen">Dolmen Library for Automated Deduction Languages</a>
</li>
</ul>
<p><a href="#rust">Rust Developments</a></p>
<ul>
<li><a href="#mikino">SMT, Induction and Mikino</a>
</li>
<li><a href="#matla">Matla, a Project Manager for TLA+/TLC</a>
</li>
<li><a href="#rust-training">Rust Training at Onera</a>
</li>
<li><a href="#rust-audit">Audit of a Rust Blockchain Node</a>
</li>
</ul>
<p><a href="#blockchains">Scaling and Verifying Blockchains</a></p>
<ul>
<li><a href="#everscale">From Dune Network to FreeTON/EverScale</a>
</li>
<li><a href="#solidity">A Why3 Framework for Solidity</a>
</li>
</ul>
<p><a href="#events">Participations in Public Events</a></p>
<ul>
<li><a href="#osxp2021">Open Source Experience 2021</a>
</li>
<li><a href="#ow2021">OCaml Workshop at ICFP 2021</a>
</li>
<li><a href="#why3consortium">Joining the Why3 Consortium at the ProofInUse Seminar</a>
</li>
</ul>
<p><a href="#next">Towards 2022</a>
</div></p>
<p>As always, we warmly thank all our clients, partners, and friends, for their support and collaboration during this peculiar year!</p>
<h2>
<a id="people" class="anchor"></a><a class="anchor-link" href="#people">Newcomers at OCamlPro</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/mini-team-2022-02-14.jpg">
<img alt="Some of the new and old members of the team: Pierre Chambart, Dario Pinto, Léo Andrès, Fabrice Le Fessant, Louis Gesbert, Artemiy Rozovyk, Muriel Shan Sei Fan, Nicolas Berthier, Vincent Laviron, Steven De Oliveira and Keryan Didier." src="/blog/assets/img/mini-team-2022-02-14.jpg"/>
</a>
<div class="caption">
Some of the new and old members of the team: Pierre Chambart, Dario Pinto, Léo Andrès, Fabrice Le Fessant, Louis Gesbert, Artemiy Rozovyk, Muriel Shan Sei Fan, Nicolas Berthier, Vincent Laviron, Steven De Oliveira and Keryan Didier.
</div>
</p>
</div>
</p>
<p>A company is nothing without its employees. This year, we have been delighted to welcome a good number of newcomers:</p>
<ul>
<li><em>Hichem Rami Ait El Hara</em> recently completed his master's degree in Computer Science. After an internship at OCamlPro, during which he developed a fuzzer for Alt-Ergo, he joined OCamlPro to work on Alt-Ergo and the verification of smart contracts. He will soon start a PhD on SMT solving.
</li>
<li><em>Nicolas Berthier</em> holds a PhD on synchronous programming for resource-constrained systems. With many years of experience in model-checking, abstract interpretation, and software analysis, he joined OCamlPro to work on programming language compilation and analysis.
</li>
<li><em>Julien Blond</em> is a senior OCaml developer with a strong experience in formal verification of security software. He joined OCamlPro as both a project manager and a Coq expert.
</li>
<li><em>Keryan Didier</em> joined the team as an R&D engineer. He holds a PhD from University Pierre et Marie Curie, during which he developed an automated implementation method for hard real-time applications. Previously, he studied functional programming and language design at University Paris-Diderot. Keryan has been involved in the MLang project as well as the flambda2 project within OCamlPro's Compilation team.
</li>
<li><em>Mohamed Hernouf</em> recently completed his master's degree in Computer Science. After an internship at OCamlPro, working on the <a href="https://docs.ocaml.pro">OCaml Documentation Hub</a>, he joined OCamlPro and continues to work on the documentation hub and other OCaml applications.
</li>
<li><em>Dario Pinto</em> is a student at the <a href="https://42.fr/en/homepage/">42Paris</a> School of Computer Science. He joined OCamlPro in a work-study contract for two years.
</li>
<li><em>Artemiy Rozovyk</em> recently completed his master's degree in Computer Science. He joined OCamlPro to work on the development of applications for the EverScale and Avalanche blockchains.
</li>
</ul>
<h2>
<a id="apps" class="anchor"></a><a class="anchor-link" href="#apps">Real Life Modern Applications</a>
</h2>
<h3>
<a id="mlang" class="anchor"></a><a class="anchor-link" href="#mlang">Modernizing the French Income Tax System</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/income-tax.jpg">
<img alt="The M language, designed in the 80s for the Income Tax, is now being rewritten and extended in OCaml." src="/blog/assets/img/income-tax.jpg"/>
</a>
<div class="caption">
The M language, designed in the 80s for the Income Tax, is now being rewritten and extended in OCaml.
</div>
</p>
</div>
</p>
<p>The M language is a very old programming language developed by the French tax administration to compute income taxes. Recently, Denis Merigoux and Raphael Monat have implemented a <a href="https://github.com/MLanguage/mlang">new compiler in OCaml</a> for the M language. This new compiler shows better performance, clearer semantics, and achieves greater maintainability than the former compiler. OCamlPro is now involved in strengthening this new compiler, to put it in production and eventually compute the taxes of more than 30 million French families.</p>
<h3>
<a id="cobol" class="anchor"></a><a class="anchor-link" href="#cobol">A First Step in the COBOL Universe</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/cobol.jpg">
<img alt="Recent studies still estimate that COBOL has the highest amount of lines of code running." src="/blog/assets/img/cobol.jpg"/>
</a>
<div class="caption">
Recent studies still estimate that COBOL has the highest amount of lines of code running.
</div>
</p>
</div>
</p>
<p>Born more than 60 years ago, <a href="https://wikipedia.org/wiki/COBOL">COBOL</a> is still said to be the most used language in the world in terms of the number of lines running in computers, though many people forecast it would disappear at the turn of the 21st century. With more than 300 reserved keywords, it is also one of the most complex languages to parse and analyse. That is not enough to scare the developers at OCamlPro: while helping one of the biggest COBOL users in France to translate its programs to the <a href="https://gnucobol.sourceforge.io/">GNUCobol open-source compiler</a>, OCamlPro built strong expertise in COBOL and mainframes, and developed a powerful COBOL parser that will help us bring modern development tools to COBOL developers.</p>
<h3>
<a id="geneweb" class="anchor"></a><a class="anchor-link" href="#geneweb">Auditing a High-Scale Genealogy Application</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/genealogie.jpg">
<img alt="Geneweb was developed in the 90s to manage family trees... and is still managing them!" src="/blog/assets/img/genealogie.jpg"/>
</a>
<div class="caption">
Geneweb was developed in the 90s to manage family trees... and is still managing them!
</div>
</p>
</div>
</p>
<p><a href="https://geneweb.tuxfamily.org/wiki/GeneWeb/fr">Geneweb</a> is one of the most powerful software to manage and share genealogical data to date. Written in OCaml more than 20 years ago, it contains a web server and complex algorithms to compute information on family trees. It is used by <a href="https://en.geneanet.org/">Geneanet</a>, which is one of the leading companies in the genealogy field, to store more than 800,000 family trees and more than 7 billion names of ancestors. OCamlPro is now working with Geneanet to improve Geneweb and make it scale to even larger data sets.</p>
<h3>
<a id="mosaic" class="anchor"></a><a class="anchor-link" href="#mosaic">Improving an ecotoxicology platform</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/labo.jpg">
<img alt="Mosaic is used by ecotoxicologists and regulators to obtain advanced and innovative methods for environmental risks assessment." src="/blog/assets/img/labo.jpg"/>
</a>
<div class="caption">
Mosaic is used by ecotoxicologists and regulators to obtain advanced and innovative methods for environmental risks assessment.
</div>
</p>
</div>
</p>
<p>The <a href="https://mosaic.univ-lyon1.fr/">Mosaic</a> platform helps researchers, industrials actors and regulators in the field of ecotoxicology by providing an easy way to run various statistical analyses. All the user has to do is to enter some data on the web interface, then computations are run on the server and the results are displayed. The platform is fully written in OCaml and takes care of calling the mathematical model which is written in R. OCamlPro modernised the project in order to ease maintainance and new contributions. In the process, we discovered <a href="https://github.com/pveber/morse/issues/286">bugs introduced by new R versions</a> (without any kind of warning). Then we developped a new interface for data input, it's similar to a spreadsheet and much more convenient than having to write raw CSV. During this work, we had the opportunity to contribute to some other OCaml packages such as <a href="https://github.com/mfp/ocaml-leveldb">leveldb</a> or write new ones such as <a href="https://github.com/OCamlPro/agrid">agrid</a>.</p>
<h2>
<a id="ocaml" class="anchor"></a><a class="anchor-link" href="#ocaml">Contributions to OCaml</a>
</h2>
<h3>
<a id="flambda" class="anchor"></a><a class="anchor-link" href="#flambda">Flambda Code Optimizer</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/flambda_2021.jpeg">
<img alt="Flambda2 is a powerful code optimizer for the OCaml compiler." src="/blog/assets/img/flambda_2021.jpeg"/>
</a>
<div class="caption">
Flambda2 is a powerful code optimizer for the OCaml compiler.
</div>
</p>
</div>
</p>
<p>OCamlPro is proud to be working on Flambda2, an ambitious OCaml optimizing compiler project, initiated with Mark Shinwell from Jane Street, our long-term partner and client. Flambda focuses on reducing the runtime cost of abstractions and removing as many short-lived allocations as possible. Jane Street has launched large-scale testing of flambda2, and on our side, we have documented the design of some key algorithms. In 2021, the Flambda team grew bigger with Keryan. Along with the considerable amount of fixes and improvements, this will allow us to publish <a href="https://github.com/ocaml-flambda/flambda-backend">Flambda2</a> in the coming months!</p>
<p>In other OCaml compiler news, 2021 saw the long-awaited merge of the multicore branch into the official development branch. This was thanks to the amazing work of many people, including our own, Damien Doligez. This is far from the end of the story though, and we're looking forward to both further contributing to the compiler (fixing bugs, re-enabling support for all platforms) and making use of the features in our own programs.</p>
<p><em>This work is allowed thanks to Jane Street’s funding.</em></p>
<h3>
<a id="opam" class="anchor"></a><a class="anchor-link" href="#opam">Opam Package Manager</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/opam_2021.jpg">
<img alt="A large set of new features have been implemented in Opam in 2021." src="/blog/assets/img/opam_2021.jpg"/>
</a>
<div class="caption">
A large set of new features have been implemented in Opam in 2021.
</div>
</p>
</div>
</p>
<p><a href="https://opam.ocaml.org">Opam</a> is the OCaml source-based package manager. The first specification draft was written <a href="https://opam.ocaml.org/about.html">in early 2012</a> and went on to become OCaml’s official package manager — though it may be used for other languages and projects, since Opam is language-agnostic! If you need to install, upgrade and manage your compiler(s), tools and libraries easily, Opam is meant for you. It supports
multiple simultaneous compiler installations, flexible package constraints, and a Git-friendly development workflow.</p>
<p>Opam development and maintenance is a collaboration between OCamlPro, with Raja & Louis, and OCamlLabs, with David Allsopp & Kate Deplaix.</p>
<p><a href="https://github.com/ocaml/opam/releases">Our 2021 work on opam</a> lead to the final release of the long-awaited opam 2.1, three versions of opam 2.0 and two versions of opam 2.1 with small fixes.</p>
<p>Opam 2.1 introduced several new features:</p>
<ul>
<li>Integration of system dependencies (formerly the opam-depext plugin)
</li>
<li>Creation of lock files for reproducible installations (formerly the opam-lock plugin)
</li>
<li>Switch invariants, replacing the "base packages" in opam 2.0 and allowing for easier compiler upgrades
</li>
<li>Improved option configurations
</li>
<li>CLI versioning, allowing cleaner deprecations for opam now, as well as future improvements to semantics without breaking backwards compatibility
</li>
<li>opam root readability by newer and older versions, even if the format changed
</li>
<li>Performance improvements to opam-update, conflict messages, and many other areas
</li>
</ul>
<p>Take a stroll through the <a href="https://opam.ocaml.org/blog//opam-2-1-0">blog post</a> for a closer look.</p>
<p>In 2021, we also prepared the soon-to-be alpha release of opam 2.2. It will provide better handling of the Windows ecosystem, integration of the Software
Heritage <a href="#foundation">archive fallback</a>, a better UI for user interactions, recursive pinning of projects, fetching optimisations, etc.</p>
<p><em>This work is greatly helped by Jane Street’s funding and support.</em></p>
<h3>
<a id="community" class="anchor"></a><a class="anchor-link" href="#community">LearnOCaml and TryOCaml</a>
</h3>
<p>We have also been active in the maintenance of <a href="https://github.com/ocaml-sf/learn-ocaml">Learn-OCaml</a>. What was originally designed as the platform for the <a href="https://www.fun-mooc.fr/en/courses/introduction-functional-programming-ocaml/">OCaml
MOOC</a> is now a tool in the hands of OCaml teachers worldwide, managed and funded by <a href="http://ocaml-sf.org/">the OCaml Foundation</a>.</p>
<p>The work included a well-overdue port to OCaml 4.12; generation of portable executables (automatic through CI) for much easier deployment and use of the command-line client; as well as many quality-of-life and usability improvements stemming from two-way conversations with many teachers.</p>
<p>On a related matter, we also reworked our on-line OCaml editor and toplevel <a href="https://try.ocaml.pro">TryOCaml</a>, improving its design and adding features like code snippet sharing. We were glad to see that, in these difficult times, these tools proved useful to both teachers and students, and look forward to improving them further.</p>
<h3>
<a id="tooling" class="anchor"></a><a class="anchor-link" href="#tooling">OCaml Documentation Hub</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ocaml_2021.jpg">
<img alt="The OCaml Documentation Hub includes browsable documentation and sources for more than 2000 Opam packages." src="/blog/assets/img/ocaml_2021.jpg"/>
</a>
<div class="caption">
The OCaml Documentation Hub includes browsable documentation and sources for more than 2000 Opam packages.
</div>
</p>
</div>
</p>
<p>As one of the biggest users of OCaml, OCamlPro aims to facilitate the daily use of OCaml by developing a lot of open-source tooling.</p>
<p>One of our main contributions to the OCaml ecosystem in 2021 was probably the OCaml Documentation Hub at <a href="https://docs.ocaml.pro">docs.ocaml.pro</a>.</p>
<p>The OCaml Documentation Hub is a website that provides documentation for more than 2000 OPAM packages, including of course the most popular ones, with inter-package documentation links! The website also contains browsable sources for all these packages, and a search engine to discover useful OCaml functions, modules, types and classes.</p>
<p>All this documentation is generated using our custom tool
<a href="https://github.com/OCamlPro/digodoc">Digodoc</a>. Though it does not warrant
a specific section, we also kept maintaining
<a href="https://github.com/OCamlPro/drom">Drom</a>, our layer over Dune and Opam
that most of our recent projects use.</p>
<h3>
<a id="free_software" class="anchor"></a><a class="anchor-link" href="#free_software">Pluging Opam into Software Heritage</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/Svalbard_seed_vault.jpg">
<img alt="Svalbard Global Seed Vault in Norway." src="/blog/assets/img/Svalbard_seed_vault.jpg"/>
</a>
<div class="caption">
Svalbard Global Seed Vault in Norway.
</div>
</p>
</div>
</p>
<p>Last year also saw the long-awaited collaboration between Software Heritage and OCamlPro happen.</p>
<p>Thanks to a grant by the <a href="https://www.softwareheritage.org/2021/04/20/connecting-ocaml/">Alfred P. Sloan Foundation</a>, OCamlPro has been able to collaborate with our partners at Software Heritage and managed to further expand the coverage of this gargantuan endeavour of theirs by archiving 3516 opam packages.
In effect, the main benefits of this Open Source collaboration have been:</p>
<ul>
<li>The addition of several modules to the Software Heritage architecture, allowing the archiving of said opam packages;
</li>
<li>The publication of an OCaml library for working with <a href="https://github.com/OCamlPro/swhid">SWHID</a>s;
</li>
<li>An implementation of a possible fallback onto Software Heritage if a given package on opam is no longer available;
</li>
<li>A fix for the official opam repository in order to identify already missing packages.
</li>
</ul>
<p>Not long after software was at last acknowledged by UNESCO as part of the World Heritage, we were thrilled to be part of this great and meaningful initiative. We could feel the true passion that remained throughout our interactions and long after the work was done.</p>
<h2>
<a id="formal-methods" class="anchor"></a><a class="anchor-link" href="#formal-methods">Tooling for Formal Methods</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/pure-mathematics-formulae-blackboard.jpg">
<img alt="Avionics, blockchains, cyber-security, cloud, etc... formal methods are spreading in the computer industry." src="/blog/assets/img/pure-mathematics-formulae-blackboard.jpg"/>
</a>
<div class="caption">
Avionics, blockchains, cyber-security, cloud, etc... formal methods are spreading in the computer industry.
</div>
</p>
</div>
</p>
<h3>
<a id="alt-ergo" class="anchor"></a><a class="anchor-link" href="#alt-ergo">Alt-Ergo Development</a>
</h3>
<p>OCamlPro develops and maintains <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a>, an automatic solver of mathematical formulas designed for program verification and based on Satisfiability Modulo Theories (SMT). Alt-Ergo was initially created within the <a href="https://vals.lri.fr/">VALS</a> team at <a href="https://www.universite-paris-saclay.fr/en">University of Paris-Saclay</a>.</p>
<p>In 2021, we continued to focus on the maintainability of our solver. We released versions 2.4.0 and <a href="https://github.com/OCamlPro/alt-ergo/releases/tag/2.4.1">2.4.1</a> in January and July respectively, with 2.4.1 containing a bugfix as well as some performance improvements.</p>
<p>In order to increase our test coverage, we instrumented Alt-Ergo so that we could run it using <a href="https://github.com/google/AFL">afl-fuzz</a>. Although this is a proof of concept, and has yet to be integrated into Alt-Ergo's continuous integration, it has already helped us find a few bugs, such as <a href="https://github.com/OCamlPro/alt-ergo/pull/489">this one</a>.</p>
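<p>To give an idea of what this looks like in practice, here is a minimal, purely illustrative sketch (not Alt-Ergo's actual harness) of an OCaml program set up as an afl-fuzz target: afl-fuzz repeatedly runs the executable on generated input files, and compiling with ocamlopt's <code>-afl-instrument</code> flag adds the coverage feedback that guides the fuzzer. The <code>parse_and_solve</code> function below is a hypothetical stand-in for the real entry point.</p>
<pre><code class="language-ocaml">(* Hypothetical afl-fuzz harness: read the input file provided by afl-fuzz
   and feed it to the code under test. Build with ocamlopt's -afl-instrument
   flag so that coverage information guides the fuzzer. *)

(* Stand-in for the real work: parse the problem and run the solver. *)
let parse_and_solve (input : string) : unit =
  if String.length input > 0 && input.[0] = '\255' then failwith "crash found"

let () =
  let file = Sys.argv.(1) in
  let ic = open_in_bin file in
  let len = in_channel_length ic in
  let input = really_input_string ic len in
  close_in ic;
  parse_and_solve input
</code></pre>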
<h3>
<a id="club" class="anchor"></a><a class="anchor-link" href="#club">Alt-Ergo Users’ Club and R&D Projects</a>
</h3>
<p>We thank our partners from the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users’ Club</a>, Adacore, CEA List, MERCE (Mitsubishi Electric R&D Centre Europe), Thalès, and Trust-In-Soft, for their trust. Their support allows us to maintain our tool.</p>
<p>The club was launched in 2019 and the third annual meeting of the Alt-Ergo Users’ Club was held in early April 2021. Our annual meeting is the perfect place to review each partner’s needs regarding Alt-Ergo. This year, we had the pleasure of receiving our partners to discuss the roadmap for future Alt-Ergo features and enhancements. If you want to join us for the next meeting (coming soon), contact us!</p>
<p>Finally, we will be able to merge into the main branch of Alt-Ergo some of the work we did in 2020. Thanks to our partner MERCE (Mitsubishi Electric R&D Centre Europe), we worked on SMT model generation. Alt-Ergo is now (partially) able to output models in the SMT-LIB 2 format. Thanks to the <a href="http://why3.lri.fr/">Why3 team</a> from University of Paris-Saclay, we hope that this work will be available in the Why3 platform to help users in their program verification efforts. OCamlPro was very happy to join the <a href="https://proofinuse.gitlabpages.inria.fr/">Why3 Consortium</a> this year, for even more collaborations to come!</p>
<p><em>This work is funded in part by the FUI R&D Project LCHIP, MERCE, Adacore and with the support of the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users’ Club</a>.</em></p>
<h3>
<a id="dolmen" class="anchor"></a><a class="anchor-link" href="#dolmen">Dolmen Library for Automated Deduction Languages</a>
</h3>
<p><a href="https://github.com/Gbury/dolmen">Dolmen</a> is a powerful library providing flexible parsers and typecheckers for many languages used in automated deduction.</p>
<p>The ongoing work on using the Dolmen library as a frontend for Alt-Ergo has progressed considerably, both on the side of Dolmen, which has been extended to support Alt-Ergo's native language in <a href="https://github.com/Gbury/dolmen/pull/89">this PR</a>, and on Alt-Ergo's side, where Dolmen was added as a selectable frontend in <a href="https://github.com/OCamlPro/alt-ergo/pull/491">this PR</a>. Once these are merged, Alt-Ergo will be able to read input problems in new languages, such as <a href="http://www.tptp.org/">TPTP</a>!</p>
<h2>
<a id="rust" class="anchor"></a><a class="anchor-link" href="#rust">Rust Developments</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/logo_rust.jpg">
<img alt="Rust is a very good complement to OCaml for performance critical applications." src="/blog/assets/img/logo_rust.jpg"/>
</a>
<div class="caption">
Rust is a very good complement to OCaml for performance critical applications.
</div>
</p>
</div>
</p>
<h3>
<a id="mikino" class="anchor"></a><a class="anchor-link" href="#mikino">SMT, Induction and Mikino</a>
</h3>
<p>A few months ago, we published a series of posts: <a href="/blog/2021_10_14_verification_for_dummies_smt_and_induction"><em>verification for dummies: SMT and induction</em></a>. These posts introduce and discuss SMT solvers, the notion of induction and that of invariant strengthening. They rely on <a href="https://github.com/OCamlPro/mikino_bin"><em>mikino</em></a>, a simple software we wrote that can analyze simple transition systems and perform SMT-based induction checks (as well as BMC, <em>i.e.</em> bug-finding). We wrote mikino in Rust with readability and ergonomics in mind: mikino showcases the basics of writing an SMT-based model checker performing induction. The posts are very hands-on and leverage mikino's high-quality output to discuss induction and invariant strengthening, with examples that readers can run and edit themselves.</p>
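<p>To give a feel for what such an induction check does, here is a minimal sketch (not mikino's actual code or input format): the two queries behind an SMT-based induction proof, phrased over a tiny finite state space so that brute-force enumeration can stand in for an SMT solver.</p>
<pre><code class="language-ocaml">(* Toy transition system: a counter that either increments or resets to 0.
   An SMT-based engine such as mikino discharges the two checks below with a
   solver over unbounded integers; here the domain is truncated to 0..100 so
   that plain enumeration suffices. *)
let states = List.init 101 (fun cnt -> cnt)
let init s = s = 0
let trans s s' = s' = s + 1 || s' = 0
let candidate s = s >= 0                  (* the invariant we want to prove *)

(* Base case: every initial state satisfies the candidate. *)
let base_ok = List.for_all (fun s -> (not (init s)) || candidate s) states

(* Inductive step: the candidate is preserved by every transition. *)
let step_ok =
  List.for_all
    (fun s ->
      (not (candidate s))
      || List.for_all (fun s' -> (not (trans s s')) || candidate s') states)
    states

let () =
  if base_ok && step_ok then
    print_endline "candidate is an invariant (proved by induction)"
  else
    print_endline "induction failed: the candidate may need strengthening"
</code></pre>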
<h3>
<a id="matla" class="anchor"></a><a class="anchor-link" href="#matla">Matla, a Project Manager for TLA+/TLC</a>
</h3>
<p>During 2021 we ended up using the TLA+ language and its associated TLC verification engine in several completely unrelated projects. TLC is an amazing tool, but it is not suited to handling a TLA+ project with many modules (files), regression tests, <em>etc.</em> In particular, TLA+ is not a typed language, which means that TLA+ code tends to contain many <em>checks</em> (dynamic assertions) verifying that quantities have the expected type. This is fine, albeit a bit tedious, but as the code grows bigger, the analysis conducted by TLC becomes very expensive. Eventually it is no longer reasonable to keep assert-type-checking everything, since doing so makes TLC's analysis explode.</p>
<p>As TLA+/TLC users, we are currently developing <code>matla</code>, which <code>ma</code>nages <code>TLA</code>+ projects. Written in Rust, matla is heavily inspired by the Rust ecosystem, in particular <a href="https://doc.rust-lang.org/cargo">cargo</a>. Matla has not been publicly released yet, as we are waiting for more feedback from early users. We do use it internally, however, as its various features make our TLA+ projects much simpler:</p>
<ul>
<li>handling the TLA toolchain (download, <code>PATH</code>, updates...) for the user;
</li>
<li>providing a <code>Matla</code> module with <em>"debug assertions"</em> helpers: these assertions are active in <em>debug</em> mode, which is the default when running <code>matla run</code>. Passing <code>--release</code> to matla's run mode, however, compiles all debug assertions away; this allows type-checking everything when debugging, while making sure release runs do not pay the price of these checks;
</li>
<li>handling <em>integration</em> testing: matla projects have a <code>tests</code> directory where users can write tests (TLA+ modules with <code>.tla</code> and <code>.cfg</code> files) and specify whether they are expected to succeed or fail (and how);
</li>
<li>understanding and transforming TLC's output to improve user feedback, in particular when TLC yields an error (this is not good enough yet, and is the reason we have not released matla); matla also parses and prettifies TLC's counterexample traces by formatting values, formatting states (aggregations of values), and rendering traces of states graphically using ASCII art.
</li>
</ul>
<h3>
<a id="rust-training" class="anchor"></a><a class="anchor-link" href="#rust-training">Rust Training at Onera</a>
</h3>
<p>The ongoing pandemic is undoubtedly impacting our professional training activities. Still, we had the opportunity to set up a Rust training session with applied researchers at ONERA during the summer. The session spanned a week (about seven hours a day) and was our first fully remote Rust training session. We still believe on-site training (when possible) is better, but fully remote training offers some flexibility (spreading out the training over several weeks, for instance), and our experience with ONERA shows that it can work in practice with the right technology. Interestingly, it turns out that some aspects of the session actually work better remotely: hands-on exercises and projects, for instance, benefit from screen sharing. Discussing code with one participant is done on a shared screen, meaning all participants can follow along if they so choose.</p>
<p>Long story short, fully remote training is something we now feel confident proposing to our clients as a flexible alternative to on-site training.</p>
<h3>
<a id="rust-audit" class="anchor"></a><a class="anchor-link" href="#rust-audit">Audit of a Rust Blockchain Node</a>
</h3>
<p>We participated in a contest aiming at writing a high-level specification of (the compiler for) the TON VM assembler, in particular its instructions and how they are compiled. This contest was a first step towards applying Formal Methods, and in particular formal verification, to the TON VM. We are happy to report that we finished first in this contest, and are looking forward to future contests pushing Formal Methods further in the EverScale blockchain.</p>
<h2>
<a id="blockchains" class="anchor"></a><a class="anchor-link" href="#blockchains">Scaling and Verifying Blockchains</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/chain.jpg">
<img alt="OCamlPro is involved in several projects with high-throughput blockchains, such as EverScale and Avalanche." src="/blog/assets/img/chain.jpg"/>
</a>
<div class="caption">
OCamlPro is involved in several projects with high-throughput blockchains, such as EverScale and Avalanche.
</div>
</p>
</div>
</p>
<h3>
<a id="everscale" class="anchor"></a><a class="anchor-link" href="#everscale">From Dune Network to FreeTON/EverScale</a>
</h3>
<p>In 2019-2020, we concentrated our blockchain development efforts on adding new programming languages to the <a href="https://dune.network">Dune Network</a> ecosystem, in collaboration with Origin Labs. You can read more about <a href="/blog/2020_06_09_a_dune_love_story_from_liquidity_to_love">Love</a> and <a href="https://medium.com/dune-network/deploy-your-first-solidity-contract-on-dune-network-a96a53169a91">Solidity for Dune</a>.</p>
<p>At the end of 2020, it became clear that high throughput was becoming a major requirement for blockchain adoption in real applications, and that the Tezos-based technology behind Dune Network could not compete with high-performance blockchains such as <a href="https://solana.com">Solana</a> or <a href="https://www.avax.network">Avalanche</a>. Following this observation, the Dune Network community decided to merge with the FreeTON community early in 2021. Initially developed by Telegram, the TON project was stopped under legal threats, but another company, <a href="https://tonlabs.io/main">TONLabs</a>, restarted the project from its open-source code under the FreeTON name, and the blockchain was launched mid-2020. FreeTON, now renamed <a href="https://everscale.network/">EverScale</a>, is today the fastest blockchain in the world, with around 55,000 transactions per second sustained over several days on an open network.</p>
<p>EverScale uses a unique community-driven development process: contests are organized by thematic sub-governances (subgovs) to improve the ecosystem, and contestants win prizes in tokens to reward their high-quality work. During 2021, OCamlPro got involved in several of these sub-governances, both as a jury member, in the Formal Methods subgov and the Developer Experience subgov, and as a contestant, winning multiple prizes for the development of smart contracts (<a href="https://medium.com/ocamlpro-blockchain-fr/zk-snarks-freeton-et-ocamlpro-796adc323351">zksnarks use-cases</a>, <a href="https://github.com/OCamlPro/freeton_auctions">auctions</a> and <a href="https://github.com/OCamlPro/devex-27-recurring-payments">recurring payments</a>), the audit of several smart contracts (<a href="https://github.com/OCamlPro/formet-17-true-nft-audit">TrueNFT audit</a>, <a href="https://github.com/OCamlPro/formet-14-rsquad-smv-audit">Smart Majority Voting audit</a> and <a href="https://github.com/OCamlPro/formet-13-radiance-dex-audit">a DEX audit</a>), and the specification of some Rust components in the node (the <a href="https://formet.gov.freeton.org/submission?proposalAddress=0%3A91a2ecea35ee9405ccb572c577cb6ba139491b493d86191e8e46a30fdd4b01e5&submissionId=5">Assembler module</a>).</p>
<p>This work in the EverScale ecosystem gave us the opportunity to develop some interesting OCaml contributions:</p>
<ul>
<li>We improved our <a href="https://github.com/OCamlPro/ocaml-solidity">ocaml-solidity</a> parser to support all the extensions of the <a href="https://solidity.readthedocs.io/en/v0.6.8/">Solidity</a> language required to parse EverScale contracts;
</li>
<li>We developed an <a href="https://github.com/OCamlPro/freeton_ocaml_sdk">OCaml binding</a> for the EverScale Rust SDK;
</li>
<li>We developed a command line <a href="https://github.com/OCamlPro/freeton_wallet">wallet called <code>ft</code></a> to help developers easily deploy the contracts and interact with them;
</li>
<li>We developed a <a href="https://gitlab.com/dune-network/ton-merge">bridge</a> between Dune Network and EverScale to swap DUN tokens into EVER tokens.
</li>
</ul>
<p><em>This work was funded by the EverScale community through contests.</em></p>
<h3>
<a id="solidity" class="anchor"></a><a class="anchor-link" href="#solidity">A Why3 Framework for Solidity</a>
</h3>
<p>Our most recent work on the EverScale blockchain has focused on the development of a <a href="http://why3.lri.fr/">Why3 framework</a> to formally verify EverScale Solidity contracts. At the same time, we have been involved in the specification of several big smart contract projects, and we plan to use this framework in practice on these projects as soon as their formal verification starts.</p>
<p>We hope to be able to extend this work to EVM-based Solidity contracts, as available on Ethereum, Avalanche and many other blockchains. Compared with other frameworks that work directly on the EVM bytecode, this work, which focuses directly on the Solidity language, should make the verification much higher-level and thus more straightforward.</p>
<h2>
<a id="events" class="anchor"></a><a class="anchor-link" href="#events">Participations in Public Events</a>
</h2>
<h3>
<a id="osxp2021" class="anchor"></a><a class="anchor-link" href="#osxp2021">Open Source Experience 2021</a>
</h3>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/ospx_1.jpg">
<img alt="Stéfane Fermigier (Abilian) and Pierre Baudracco (BlueMind) from Systematic Open Source Hub meet Amélie de Montchalin (French Minister of Public Service) in front of OCamlPro's booth." src="/blog/assets/img/ospx_1.jpg"/>
</a>
<div class="caption">
Stéfane Fermigier (Abilian) and Pierre Baudracco (BlueMind) from Systematic Open Source Hub meet Amélie de Montchalin (French Minister of Public Service) in front of OCamlPro's booth.
</div>
</p>
</div>
</p>
<p>We were present at the new edition of the <a href="https://www.opensource-experience.com/">Open Source Experience</a> in Paris! Our booth welcomed visitors to discuss tailor-made software solutions. Fabrice had the opportunity to give a presentation on FreeTON (now EverScale) <a href="https://www.youtube.com/watch?v=EEtE4YpWbjw">(watch the video)</a>, the high-speed blockchain he is working on. We were delighted to meet the open source community. Moreover, Amélie de Montchalin, French Minister of Transformation and Public Service, was present at the Open Source Experience to thank all the free software actors. A very nice experience for us; we can't wait to be back in 2022!</p>
<h3>
<a id="ow2021" class="anchor"></a><a class="anchor-link" href="#ow2021">OCaml Workshop at ICFP 2021</a>
</h3>
<p>We participated in the programming competition organized by the International Conference on Functional Programming (ICFP). Three talks we submitted to the OCaml Workshop were accepted!</p>
<ul>
<li>Fabrice, Mohamed and Louis presented <a href="https://github.com/OCamlPro/digodoc">Digodoc</a>, our new tool that builds a graph of an opam switch, associating files, libraries and opam packages into a cyclic graph of inclusions and dependencies;
</li>
<li>Fabrice spoke about <a href="https://github.com/OCamlPro/opam-bin">Opam-bin</a>, a plugin that builds binary opam packages on the fly;
</li>
<li>Lastly, Steven and David presented <strong>Love</strong>, a smart contract language embedded in the Dune Network blockchain.
</li>
</ul>
<p>It was an opportunity to present our tools and projects, and above all to discuss with the OCaml community. We're delighted to take part in this adventure every year!</p>
<h3>
<a id="why3consortium" class="anchor"></a><a class="anchor-link" href="#why3consortium">Joining the Why3 Consortium at the ProofInUse seminar</a>
</h3>
<p>We were very happy to join the Why3 Consortium while participating in the ProofInUse joint lab <a href="https://proofinuse.gitlabpages.inria.fr/meeting-2021oct21/">seminar on counterexamples</a> on October the 1st. Many thanks to Claude Marché for his role as scientific shepherd.</p>
<h2>
<a id="next" class="anchor"></a><a class="anchor-link" href="#next">Towards 2022</a>
</h2>
<p>
<div class="figure">
<p>
<a href="/blog/assets/img/towards_2022.jpeg">
<img alt="Though 2022 is just starting, it already sounds like a great year with many new interesting and innovative projects for OCamlPro." src="/blog/assets/img/towards_2022.jpeg"/>
</a>
<div class="caption">
Though 2022 is just starting, it already sounds like a great year with many new interesting and innovative projects for OCamlPro.
</div>
</p>
</div>
</p>
<p>After a phase of adaptation to the health context in 2020 and a year of growth in 2021, we are motivated to start 2022 with new and very enriching projects and new professional encounters, leading to the growth of our teams. If you want to be part of a passionate team, we would love to hear from you! We are actively hiring: check the available job positions and follow the application instructions!</p>
<p>All our amazing achievements are the result of incredible people and teamwork, kudos to Fabrice, Pierre, Louis, Vincent, Damien, Raja, Steven, Guillaume, David, Adrien, Léo, Keryan, Mohamed, Hichem, Dario, Julien, Artemiy, Nicolas, Elias, Marla, Aurore and Muriel.</p>
Verification for Dummies: SMT and Induction https://ocamlpro.com/blog/2021_10_14_verification_for_dummies_smt_and_induction2021-10-14T08:12:13Z2021-10-14T08:12:13Z
Adrien Champion
Adrien Champion adrien.champion@ocamlpro.com
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. These posts broadly discusses induction as a formal verification technique, which here really means formal program verification. I will use concrete, runnabl...<ul>
<li>Adrien Champion <a href="mailto:adrien.champion@ocamlpro.com">adrien.champion@ocamlpro.com</a>
</li>
<li><a href="http://creativecommons.org/licenses/by-sa/4.0/"></a> This work is licensed under a <a href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
</li>
</ul>
<p>These posts broadly discuss <em>induction</em> as a <em>formal verification</em> technique, which here really means <em>formal program verification</em>. I will use concrete, runnable examples whenever possible. Some of them can run directly in a browser, while others require running small, easy-to-retrieve tools locally. Such is the case for pretty much all examples dealing directly with induction.</p>
<p>The next chapters discuss the following notions:</p>
<ul>
<li>formal logics and formal frameworks;
</li>
<li>SMT-solving: modern, <em>low-level</em> verification building blocks;
</li>
<li>declarative transition systems;
</li>
<li>transition system unrolling;
</li>
<li>BMC and induction proofs over transition systems;
</li>
<li>candidate strengthening.
</li>
</ul>
<p>The approach presented here is far from being the only one when it comes to program verification. It happens to be relatively simple to
understand, and I believe that familiarity with the notions discussed here makes understanding other approaches significantly easier.</p>
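<p>As a concrete (if toy) preview of what BMC means, here is a minimal, self-contained OCaml sketch, not taken from the posts themselves, which use their own tools and input formats: it explores all states of a small transition system reachable within a bound, looking for one that violates a candidate property. Real BMC engines unroll the transition relation symbolically and hand the result to an SMT solver instead of enumerating states.</p>
<pre><code class="language-ocaml">(* Toy transition system: a counter starting at 0 that either increments or
   resets. [bmc depth] explores every state reachable in at most [depth]
   steps and reports the first one violating [candidate], if any. *)
type state = { cnt : int }

let init = { cnt = 0 }
let step s = [ { cnt = s.cnt + 1 }; { cnt = 0 } ]   (* possible transitions *)
let candidate s = s.cnt >= 0                        (* property to check *)

let bmc depth =
  let rec go d frontier =
    match List.find_opt (fun s -> not (candidate s)) frontier with
    | Some s -> Printf.printf "counterexample at depth %d: cnt = %d\n" d s.cnt
    | None ->
      if d < depth then go (d + 1) (List.concat_map step frontier)
      else Printf.printf "no counterexample up to depth %d\n" depth
  in
  go 0 [ init ]

let () = bmc 10
</code></pre>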
<p>This book thus hopes to serve both as a relatively deep dive into the specific technique of SMT-based induction, as well as an example of the technical challenges inherent to both developing and using automated proof engines.</p>
<p>Some chapters contain a few pieces of Rust code, usually to provide a runnable version of a system under discussion, or to serve as an example of actual code that we want to encode and verify. Some notions of Rust could definitely help in places, but this is not mandatory (probably).</p>
<p>Read more here: <a href="https://github.com/rust-lang/this-week-in-rust/pull/2479"></a><a href="https://ocamlpro.github.io/verification_for_dummies/">https://ocamlpro.github.io/verification_for_dummies/</a></p>
Generating static and portable executables with OCamlhttps://ocamlpro.com/blog/2021_09_02_generating_static_and_portable_executables_with_ocaml2021-09-02T08:12:13Z2021-09-02T08:12:13Z
Louis Gesbert
Distributing OCaml software on opam is great (if I dare say so myself), but sometimes you need to provide your tools to an audience outside of the OCaml community, or just without recompilations or in a simpler way.
However, just distributing the locally generated binaries requires that the users ha...<blockquote>
<p>Distributing OCaml software on opam is great (if I dare say so myself), but sometimes you need to provide your tools to an audience outside of the OCaml community, or just without recompilations or in a simpler way.</p>
<p>However, just distributing the locally generated binaries requires that the users have all the needed shared libraries installed, and a compatible libc. It's not something you can assume in general, and even if you don't need any C shared library or are confident enough it will be installed everywhere, the libc issue will arise for anyone using a distribution of a different kind, or one a little older than the one you used to build.</p>
<p>There is no built-in support for generating static executables in the OCaml compiler, and it may seem a bit tricky, but it's not in fact too complex to do by hand, something you may be ready to do for a release that will be published. So here are a few tricks, recipes and advice that should enable you to generate truly portable executables with no external dependency whatsoever. Both Linux and macOS will be treated, but the examples will be based on Linux unless otherwise specified.</p>
</blockquote>
<h2>Example</h2>
<p>I will take as an example a trivial HTTP file server based on <a href="https://github.com/aantron/dream">Dream</a>.</p>
<blockquote>
<details>
<summary>Sample code</summary>
<h5>fserv.ml</h5>
<pre><code class="language-ocaml">let () = Dream.(run @@ logger @@ static ".")
</code></pre>
<h5>fserv.opam</h5>
<pre><code class="language-python">opam-version: "2.0"
depends: ["ocaml" "dream"]
</code></pre>
<h5>dune-project</h5>
<pre><code class="language-lisp">(lang dune 2.8)
(name fserv)
</code></pre>
</details>
</blockquote>
<p>The relevant part of our <code>dune</code> file is just:</p>
<pre><code class="language-lisp">(executable
(public_name fserv)
(libraries dream))
</code></pre>
<p>This is how we check the resulting binary:</p>
<pre><code class="language-shell-session">$ dune build fserv.exe
ocamlc .fserv.eobjs/byte/dune__exe__Fserv.{cmi,cmo,cmt}
ocamlopt .fserv.eobjs/native/dune__exe__Fserv.{cmx,o}
ocamlopt fserv.exe
$ file _build/default/fserv.exe
_build/default/fserv.exe: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=1991bb9f1d67807411c93f6fb6ec46b4a0ee8ed5, for GNU/Linux 3.2.0, with debug_info, not stripped
$ ldd _build/default/fserv.exe
linux-vdso.so.1 (0x00007ffe97690000)
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x00007fd6cc636000)
libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 (0x00007fd6cc342000)
libev.so.4 => /usr/lib/x86_64-linux-gnu/libev.so.4 (0x00007fd6cc330000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd6cc30e000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fd6cc1ca000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd6cc1c4000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd6cbffd000)
/lib64/ld-linux-x86-64.so.2 (0x00007fd6cced7000)
</code></pre>
<p>(on macOS, replace <code>ldd</code> with <code>otool -L</code>; dune output is obtained with <code>(display short)</code> in <code>~/.config/dune/config</code>)</p>
<p>So let's see how to change this result. Basically, here, <code>libev</code>, <code>libssl</code> and <code>libcrypto</code> are required shared libraries that may not be installed on every system, while all the others are part of the core system:</p>
<ul>
<li><code>linux-vdso</code>, <code>libdl</code> and <code>ld-linux</code> are concerned with the dynamic loading of shared objects ;
</li>
<li><code>libm</code> and <code>libpthread</code> are extensions of the core <code>libc</code> that are tightly bound to it, and always installed.
</li>
</ul>
<h2>Statically linking the libraries</h2>
<p>In simple cases, static linking can be turned on as easily as passing the <code>-static</code> flag to the C compiler: through OCaml you will need to pass <code>-cclib -static</code>. We can add that to our <code>dune</code> file:</p>
<pre><code class="language-lisp">(executable
(public_name fserv)
(flags (:standard -cclib -static))
(libraries dream))
</code></pre>
<p>... which gives:</p>
<pre><code class="language-shell-session">$ dune build fserv.exe
ocamlc .fserv.eobjs/byte/dune__exe__Fserv.{cmi,cmo,cmt}
ocamlopt .fserv.eobjs/native/dune__exe__Fserv.{cmx,o}
ocamlopt fserv.exe
/usr/bin/ld: /usr/lib/gcc/x86_64-linuxgnu/10/../../../x86_64-linux-gnu/libcrypto.a(dso_dlfcn.o): in function `dlfcn_globallookup':
(.text+0x13): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/bin/ld: ~/.opam/4.11.0/lib/ocaml/libunix.a(initgroups.o): in function `unix_initgroups':
initgroups.c:(.text.unix_initgroups+0x1f): warning: Using 'initgroups' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
[...]
$ file _build/default/fserv.exe
_build/default/fserv.exe: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, BuildID[sha1]=9ee3ae1c24fbc291d1f580bc7aaecba2777ee6c2, for GNU/Linux 3.2.0, with debug_info, not stripped
$ ldd _build/default/fserv.exe
not a dynamic executable
</code></pre>
<p>The executable was generated... and the result <em>seems</em> OK, but we shouldn't skip all these <code>ld</code> warnings. Basically, what <code>ld</code> is telling us is that you shouldn't statically link <code>glibc</code> (it internally uses dynlinking, to libraries that also need <code>glibc</code> functions, and will therefore <strong>still</strong> need to dynlink a second version from the system 🤯).</p>
<p>Indeed here, we have been statically linking a dynamic linking engine, among other things. Don't do it.</p>
<h3>Linux solution: linking with musl instead of glibc</h3>
<p>The easiest workaround at this point, on Linux, is to compile with <a href="http://musl.libc.org/">musl</a>, which is basically a glibc replacement that can be statically linked. There are some OCaml and gcc variants to automatically use musl (comments welcome if you have been successful with them!), but I have found the simplest option is to use a tiny Alpine Linux image through a Docker container. Here we'll use OCamlPro's <a href="https://hub.docker.com/r/ocamlpro/ocaml">minimal Docker images</a> but anything based on musl should do.</p>
<pre><code class="language-shell-session">$ docker run --rm -it ocamlpro/ocaml:4.12
[...]
~/fserv $ sudo apk add openssl-libs-static
(1/1) Installing openssl-libs-static (1.1.1l-r0)
OK: 161 MiB in 52 packages
~/fserv $ opam switch create . --deps ocaml-system
[...]
~/fserv $ eval $(opam env)
~/fserv $ dune build fserv.exe
~/fserv $ file _build/default/fserv.exe
_build/default/fserv.exe: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, with debug_info, not stripped
~/fserv $ ldd _build/default/fserv.exe
/lib/ld-musl-x86_64.so.1 (0x7ff41353f000)
</code></pre>
<p>Almost there! We see that we had to install extra packages with <code>apk add</code>: the static libraries might not be installed already, and in this case they are in a separate package (you would get <code>bin/ld: cannot find -lssl</code>). The last remaining dynamic loader in the output of <code>ldd</code> is there because static PIE executables were not supported <a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81498#c1">until recently</a>. To get rid of it, we just need to add <code>-cclib -no-pie</code> (note: a previous revision of this blogpost mentioned <code>-static-pie</code> instead, which may work with recent compilers, but didn't seem to give reliable results):</p>
<pre><code class="language-lisp">(executable
(public_name fserv)
(flags (:standard -cclib -static -cclib -no-pie))
(libraries dream))
</code></pre>
<p>And we are good!</p>
<pre><code class="language-shell-session">~/fserv $ file _build/default/fserv.exe
_build/default/fserv.exe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, with debug_info, not stripped
~/fserv $ ldd _build/default/fserv.exe
/lib/ld-musl-x86_64.so.1: _build/default/fserv.exe: Not a valid dynamic program
</code></pre>
<blockquote>
<p><strong>Trick</strong>: short script to compile through a Docker container</p>
<p>Passing the context to a Docker container and getting the artefacts back can be bothersome and often causes file ownership issues, so I use the following snippet to pipe them to/from it using <code>tar</code>:</p>
<pre><code class="language-bash">git ls-files -z | xargs -0 tar c |
docker run --rm -i ocamlpro/ocaml:4.12
sh -uexc
'{ tar x &&
opam switch create . ocaml-system --deps-only --locked &&
opam exec -- dune build --profile=release @install;
} >&2 && tar c -hC _build/install/default/bin .' |
tar vx
</code></pre>
</blockquote>
<h3>The other cases: turning to manual linking</h3>
<p>Sometimes you can't use the above: the automatic linking options may need to be tweaked for static libraries, your app may still need dynlinking support at some point, or you may not have the musl option. On macOS, for example, the libc doesn't have a static version at all (and the <code>-static</code> option of <code>ld</code> is explicitly "only used building the kernel"). Let's get our hands dirty and see how to use a mixed static/dynamic linking scheme. First, we examine how OCaml does the linking usually:</p>
<p>The linking options are passed automatically by OCaml, using information that is embedded in the <code>cm(x)a</code> files, for example:</p>
<pre><code class="language-shell-session">$ ocamlobjinfo $(opam var lwt:lib)/unix/lwt_unix.cma |head
File ~/.opam/4.11.0/lib/lwt/unix/lwt_unix.cma
Force custom: no
Extra C object files: -llwt_unix_stubs -lev -lpthread
Extra C options:
Extra dynamically-loaded libraries: -llwt_unix_stubs
Unit name: Lwt_features
Interfaces imported:
c21c5d26416461b543321872a551ea0d Stdlib
1372e035e54f502dcc3646993900232f Lwt_features
3a3ca1838627f7762f49679ce0278ad1 CamlinternalFormatBasics
</code></pre>
<p>Now the linking flags, here <code>-llwt_unix_stubs -lev -lpthread</code> let the C compiler choose the best way to link; in the case of stubs, they will be static (using the <code>.a</code> files — unless you make special effort to use dynamic ones), but <code>-lev</code> will let the system linker select the shared library, because it is generally preferred. Gathering these flags by hand would be tedious: my preferred trick is to just add the <code>-verbose</code> flag to OCaml (for the lazy, you can just set — temporarily — <code>OCAMLPARAM=_,verbose=1</code>):</p>
<pre><code class="language-lisp">(executable
(public_name fserv)
(flags (:standard -verbose))
(libraries dream))
</code></pre>
<pre><code class="language-shell-session">$ dune build
ocamlc .fserv.eobjs/byte/dune__exe__Fserv.{cmi,cmo,cmt}
ocamlopt .fserv.eobjs/native/dune__exe__Fserv.{cmx,o}
+ as -o '.fserv.eobjs/native/dune__exe__Fserv.o' '/tmp/build8eb7e5.dune/camlasm91a0b9.s'
ocamlopt fserv.exe
+ as -o '/tmp/build8eb7e5.dune/camlstartupc9267f.o' '/tmp/build8eb7e5.dune/camlstartup1d9915.s'
+ gcc -O2 -fno-strict-aliasing -fwrapv -Wall -Wdeclaration-after-statement -fno-common -fexcess-precision=standard -fno-tree-vrp -ffunction-sections -D_FILE_OFFSET_BITS=64 -D_REENTRANT -DCAML_NAME_SPACE -Wl,-E -o 'fserv.exe' '-L~/.opam/4.11.0/lib/bigstringaf' '-L~/.opam/4.11.0/lib/ocaml' '-L~/.opam/4.11.0/lib/ocaml' '-L~/.opam/4.11.0/lib/ocaml' '-L~/.opam/4.11.0/lib/lwt/unix' '-L~/.opam/4.11.0/lib/cstruct' '-L~/.opam/4.11.0/lib/mirage-crypto' '-L~/.opam/4.11.0/lib/mirage-crypto-rng/unix' '-L~/.opam/4.11.0/lib/mtime/os' '-L~/.opam/4.11.0/lib/digestif/c' '-L~/.opam/4.11.0/lib/bigarray-overlap/stubs' '-L~/.opam/4.11.0/lib/ocaml' '-L~/.opam/4.11.0/lib/ssl' '-L~/.opam/4.11.0/lib/ocaml' '/tmp/build8eb7e5.dune/camlstartupc9267f.o' '~/.opam/4.11.0/lib/ocaml/std_exit.o' '.fserv.eobjs/native/dune__exe__Fserv.o' '~/.opam/4.11.0/lib/dream/dream.a' '~/.opam/4.11.0/lib/dream/sql/dream__sql.a' '~/.opam/4.11.0/lib/dream/http/dream__http.a' '~/.opam/4.11.0/lib/dream/websocketaf/websocketaf.a' '~/.opam/4.11.0/lib/dream/httpaf-lwt-unix/httpaf_lwt_unix.a' '~/.opam/4.11.0/lib/dream/httpaf-lwt/httpaf_lwt.a' '~/.opam/4.11.0/lib/dream/h2-lwt-unix/h2_lwt_unix.a' '~/.opam/4.11.0/lib/dream/h2-lwt/h2_lwt.a' '~/.opam/4.11.0/lib/dream/h2/h2.a' '~/.opam/4.11.0/lib/psq/psq.a' '~/.opam/4.11.0/lib/dream/httpaf/httpaf.a' '~/.opam/4.11.0/lib/dream/hpack/hpack.a' '~/.opam/4.11.0/lib/dream/gluten-lwt-unix/gluten_lwt_unix.a' '~/.opam/4.11.0/lib/lwt_ssl/lwt_ssl.a' '~/.opam/4.11.0/lib/ssl/ssl.a' '~/.opam/4.11.0/lib/dream/gluten-lwt/gluten_lwt.a' '~/.opam/4.11.0/lib/faraday-lwt-unix/faraday_lwt_unix.a' '~/.opam/4.11.0/lib/faraday-lwt/faraday_lwt.a' '~/.opam/4.11.0/lib/dream/gluten/gluten.a' '~/.opam/4.11.0/lib/faraday/faraday.a' '~/.opam/4.11.0/lib/dream/localhost/dream__localhost.a' '~/.opam/4.11.0/lib/dream/graphql/dream__graphql.a' '~/.opam/4.11.0/lib/ocaml/str.a' '~/.opam/4.11.0/lib/graphql-lwt/graphql_lwt.a' '~/.opam/4.11.0/lib/graphql/graphql.a' '~/.opam/4.11.0/lib/graphql_parser/graphql_parser.a' '~/.opam/4.11.0/lib/re/re.a' '~/.opam/4.11.0/lib/dream/middleware/dream__middleware.a' '~/.opam/4.11.0/lib/yojson/yojson.a' '~/.opam/4.11.0/lib/biniou/biniou.a' '~/.opam/4.11.0/lib/easy-format/easy_format.a' '~/.opam/4.11.0/lib/magic-mime/magic_mime_library.a' '~/.opam/4.11.0/lib/fmt/fmt_tty.a' '~/.opam/4.11.0/lib/multipart_form/lwt/multipart_form_lwt.a' '~/.opam/4.11.0/lib/dream/pure/dream__pure.a' '~/.opam/4.11.0/lib/hmap/hmap.a' '~/.opam/4.11.0/lib/multipart_form/multipart_form.a' '~/.opam/4.11.0/lib/rresult/rresult.a' '~/.opam/4.11.0/lib/pecu/pecu.a' '~/.opam/4.11.0/lib/prettym/prettym.a' '~/.opam/4.11.0/lib/bigarray-overlap/overlap.a' '~/.opam/4.11.0/lib/bigarray-overlap/stubs/overlap_stubs.a' '~/.opam/4.11.0/lib/base64/rfc2045/base64_rfc2045.a' '~/.opam/4.11.0/lib/unstrctrd/parser/unstrctrd_parser.a' '~/.opam/4.11.0/lib/unstrctrd/unstrctrd.a' '~/.opam/4.11.0/lib/uutf/uutf.a' '~/.opam/4.11.0/lib/ke/ke.a' '~/.opam/4.11.0/lib/fmt/fmt.a' '~/.opam/4.11.0/lib/base64/base64.a' '~/.opam/4.11.0/lib/digestif/c/digestif_c.a' '~/.opam/4.11.0/lib/stdlib-shims/stdlib_shims.a' '~/.opam/4.11.0/lib/dream/graphiql/dream__graphiql.a' '~/.opam/4.11.0/lib/dream/cipher/dream__cipher.a' '~/.opam/4.11.0/lib/mirage-crypto-rng/lwt/mirage_crypto_rng_lwt.a' '~/.opam/4.11.0/lib/mtime/os/mtime_clock.a' '~/.opam/4.11.0/lib/mtime/mtime.a' '~/.opam/4.11.0/lib/duration/duration.a' '~/.opam/4.11.0/lib/mirage-crypto-rng/unix/mirage_crypto_rng_unix.a' '~/.opam/4.11.0/lib/mirage-crypto-rng/mirage_crypto_rng.a' '~/.opam/4.11.0/lib/mirage-crypto/mirage_crypto.a' 
'~/.opam/4.11.0/lib/eqaf/cstruct/eqaf_cstruct.a' '~/.opam/4.11.0/lib/eqaf/bigstring/eqaf_bigstring.a' '~/.opam/4.11.0/lib/eqaf/eqaf.a' '~/.opam/4.11.0/lib/cstruct/cstruct.a' '~/.opam/4.11.0/lib/caqti-lwt/caqti_lwt.a' '~/.opam/4.11.0/lib/lwt/unix/lwt_unix.a' '~/.opam/4.11.0/lib/ocaml/threads/threads.a' '~/.opam/4.11.0/lib/ocplib-endian/bigstring/ocplib_endian_bigstring.a' '~/.opam/4.11.0/lib/ocplib-endian/ocplib_endian.a' '~/.opam/4.11.0/lib/mmap/mmap.a' '~/.opam/4.11.0/lib/ocaml/bigarray.a' '~/.opam/4.11.0/lib/ocaml/unix.a' '~/.opam/4.11.0/lib/logs/logs_lwt.a' '~/.opam/4.11.0/lib/lwt/lwt.a' '~/.opam/4.11.0/lib/caqti/caqti.a' '~/.opam/4.11.0/lib/uri/uri.a' '~/.opam/4.11.0/lib/angstrom/angstrom.a' '~/.opam/4.11.0/lib/bigstringaf/bigstringaf.a' '~/.opam/4.11.0/lib/bigarray-compat/bigarray_compat.a' '~/.opam/4.11.0/lib/stringext/stringext.a' '~/.opam/4.11.0/lib/ptime/ptime.a' '~/.opam/4.11.0/lib/result/result.a' '~/.opam/4.11.0/lib/logs/logs.a' '~/.opam/4.11.0/lib/ocaml/stdlib.a' '-lssl_stubs' '-lssl' '-lcrypto' '-lcamlstr' '-loverlap_stubs_stubs' '-ldigestif_c_stubs' '-lmtime_clock_stubs' '-lrt' '-lmirage_crypto_rng_unix_stubs' '-lmirage_crypto_stubs' '-lcstruct_stubs' '-llwt_unix_stubs' '-lev' '-lpthread' '-lthreadsnat' '-lpthread' '-lunix' '-lbigstringaf_stubs' '~/.opam/4.11.0/lib/ocaml/libasmrun.a' -lm -ldl
</code></pre>
<p>There is a lot of noise, but the interesting part is at the end, the <code>-l*</code> options before the standard <code>ocaml/libasmrun -lm -ldl</code>:</p>
<pre><code class="language-shell-session"> '-lssl_stubs' '-lssl' '-lcrypto' '-lcamlstr' '-loverlap_stubs_stubs' '-ldigestif_c_stubs' '-lmtime_clock_stubs' '-lrt' '-lmirage_crypto_rng_unix_stubs' '-lmirage_crypto_stubs' '-lcstruct_stubs' '-llwt_unix_stubs' '-lev' '-lpthread' '-lthreadsnat' '-lpthread' '-lunix' '-lbigstringaf_stubs'
</code></pre>
<h4>Manually linking with glibc (Linux)</h4>
<p>To link these libraries statically while keeping the glibc dynamic:</p>
<ul>
<li>we disable the automatic generation of linking flags by OCaml with <code>-noautolink</code>
</li>
<li>we pass directives to the linker through OCaml and the C compiler, using <code>-cclib -Wl,xxx</code>. <code>-Bstatic</code> makes static linking the preferred option
</li>
<li>we escape the linking flags we extracted above through <code>-cclib</code>
</li>
</ul>
<pre><code class="language-lisp">(executable
(public_name fserv)
(flags (:standard
-noautolink
-cclib -Wl,-Bstatic
-cclib -lssl_stubs -cclib -lssl
-cclib -lcrypto -cclib -lcamlstr
-cclib -loverlap_stubs_stubs -cclib -ldigestif_c_stubs
-cclib -lmtime_clock_stubs -cclib -lrt
-cclib -lmirage_crypto_rng_unix_stubs -cclib -lmirage_crypto_stubs
-cclib -lcstruct_stubs -cclib -llwt_unix_stubs
-cclib -lev -cclib -lthreadsnat
-cclib -lunix -cclib -lbigstringaf_stubs
-cclib -Wl,-Bdynamic
-cclib -lpthread))
(libraries dream))
</code></pre>
<p>Note that <code>-lpthread</code> and <code>-lm</code> are tightly bound to the libc and can't be static in this case, so we moved <code>-lpthread</code> to the end, outside of the static section. The part between the <code>-Bstatic</code> and the <code>-Bdynamic</code> is what will be statically linked, leaving the defaults and the libc dynamic. Result:</p>
<pre><code class="language-shell-session">$ dune build fserv.exe && ldd _build/default/fserv.exe
ocamlc .fserv.eobjs/byte/dune__exe__Fserv.{cmi,cmo,cmt}
ocamlopt .fserv.eobjs/native/dune__exe__Fserv.{cmx,o}
ocamlopt fserv.exe
$ file _build/default/fserv.exe
_build/default/fserv.exe: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=31c93085284da5d74002218b1d6b61c0efbdefe4, for GNU/Linux 3.2.0, with debug_info, not stripped
$ ldd _build/default/fserv.exe
linux-vdso.so.1 (0x00007ffe207c5000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f49d5e56000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f49d5d12000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f49d5d0c000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f49d5b47000)
/lib64/ld-linux-x86-64.so.2 (0x00007f49d69bf000)
</code></pre>
<p>The remaining libraries are the base of the dynamic linking / shared object system, but we no longer depend on <code>libssl</code>, <code>libcrypto</code> and <code>libev</code>, which were the ones possibly absent from target systems. The resulting executable should work on any glibc-based Linux distribution that is recent enough; on older ones you will likely get errors about missing <code>GLIBC</code> symbols.</p>
<p>If you need to distribute that way, it's a good idea to compile on an old release (like Debian 'oldstable' or 'oldoldstable') for maximum portability.</p>
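<p>To check what a given binary will actually require at run time, you can list the versioned glibc symbols it references; this only uses standard binutils and is in no way specific to OCaml:</p>
<pre><code class="language-shell-session">$ objdump -T _build/default/fserv.exe | grep -o 'GLIBC_[0-9.]*' | sort -uV   # version-sorted list of required glibc symbol versions
</code></pre>
<p>The most recent version appearing in that list tells you the oldest glibc your users will need.</p>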
<h4>Manually linking on macOS</h4>
<p>Unfortunately, the linker on macOS doesn't seem to have options to select the static versions of the libraries; the only solution is to get our hands even dirtier, and link directly to the <code>.a</code> files, instead of using <code>-l</code> arguments.</p>
<p>Most of the flags just link with stubs, we can keep them as is: <code>-lssl_stubs</code> <code>-lcamlstr</code> <code>-loverlap_stubs_stubs</code> <code>-ldigestif_c_stubs</code> <code>-lmtime_clock_stubs</code> <code>-lmirage_crypto_rng_unix_stubs</code> <code>-lmirage_crypto_stubs</code> <code>-lcstruct_stubs</code> <code>-llwt_unix_stubs</code> <code>-lthreadsnat</code> <code>-lunix</code> <code>-lbigstringaf_stubs</code></p>
<p>That leaves us with: <code>-lssl</code> <code>-lcrypto</code> <code>-lev</code> <code>-lpthread</code></p>
<ul>
<li><code>-lpthread</code> is built-in on macOS, so we can ignore it
</li>
<li>for the others, we need to look up the <code>.a</code> files; I use <em>e.g.</em>
<pre><code class="language-shell-session">$ echo $(pkg-config libssl --variable libdir)/libssl.a
~/brew/Cellar/openssl@1.1/1.1.1k/lib/libssl.a
</code></pre>
</li>
</ul>
<p>Of course you don't want to hardcode these paths, but let's test for now:</p>
<pre><code class="language-lisp">(executable
(public_name fserv)
(flags (:standard
-noautolink
-cclib -lssl_stubs -cclib -lcamlstr
-cclib -loverlap_stubs_stubs -cclib -ldigestif_c_stubs
-cclib -lmtime_clock_stubs -cclib -lmirage_crypto_rng_unix_stubs
-cclib -lmirage_crypto_stubs -cclib -lcstruct_stubs
-cclib -llwt_unix_stubs -cclib -lthreadsnat
-cclib -lunix -cclib -lbigstringaf_stubs
-cclib ~/brew/Cellar/openssl@1.1/1.1.1k/lib/libssl.a
-cclib ~/brew/Cellar/openssl@1.1/1.1.1k/lib/libcrypto.a
-cclib ~/brew/Cellar/libev/4.33/lib/libev.a))
(libraries dream))
</code></pre>
<pre><code class="language-shell-session">$ dune build fserv.exe
ocamlc .fserv.eobjs/byte/dune__exe__Fserv.{cmi,cmo,cmt}
ocamlopt .fserv.eobjs/native/dune__exe__Fserv.{cmx,o}
ocamlopt fserv.exe
$ file _build/default/fserv.exe
_build/default/fserv.exe: Mach-O 64-bit executable x86_64
$ otool -L _build/default/fserv.exe
_build/default/fserv.exe:
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1292.60.1)
</code></pre>
<p>This is as good as it will get!</p>
<h2>Cleaning up the build system</h2>
<p>Until now we have been adding the linking flags manually to the <code>dune</code> file; you probably don't want to commit that and be restricted to static builds only, not to mention the non-portable link options we have been using...</p>
<h3>The quick&dirty way</h3>
<p>Don't use this in your build system! But for quick testing you can conveniently pass flags to the OCaml compilers using the <code>OCAMLPARAM</code> variable. Combined with the tar/docker snippet above, we get a very simple static-binary generating command:</p>
<pre><code class="language-bash">git ls-files -z | xargs -0 tar c |
docker run --rm -i ocamlpro/ocaml:4.12
sh -uexc '{
tar x &&
sudo apk add openssl-libs-static &&
opam switch create . ocaml-system --deps-only --locked &&
OCAMLPARAM=_,cclib=-static,cclib=-no-pie opam exec -- dune build --profile=release @install;
} >&2 && tar c -hC _build/install/default/bin .' |
tar vx
</code></pre>
<p>Note that, for releases, you may also want to <code>strip</code> the generated binaries.</p>
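<p>For instance (dune's <code>_build</code> artifacts are read-only, so work on a copy):</p>
<pre><code class="language-shell-session">$ cp _build/default/fserv.exe fserv && chmod u+w fserv
$ strip fserv
</code></pre>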
<h3>Making it an option of the build system (with dune)</h3>
<p>For something you will want to commit, I recommend generating the flags in a separate file <code>linking-flags-fserv.sexp</code>:</p>
<pre><code class="language-lisp">(executable
(public_name fserv)
(flags (:standard (:include linking-flags-fserv.sexp)))
(libraries dream))
</code></pre>
<p>The linking flags will depend on the chosen linking mode and on the OS. For the OS, it's easiest to generate them through a script; for the linking mode, I use an environment variable to optionally turn static linking on.</p>
<pre><code class="language-lisp">(rule
(with-stdout-to linking-flags-fserv.sexp
(run ./gen-linking-flags.sh %{env:LINKING_MODE=dynamic} %{ocaml-config:system})))
</code></pre>
<p>This will use the following <code>gen-linking-flags.sh</code> script to generate the file, passing it the value of <code>$LINKING_MODE</code> and defaulting to <code>dynamic</code>. Doing it this way also ensures that <code>dune</code> will properly recompile when the value of the environment variable changes.</p>
<pre><code class="language-bash">#!/bin/sh
set -ue
LINKING_MODE="$1"
OS="$2"
FLAGS=
CCLIB=
case "$LINKING_MODE" in
dynamic)
;; # No extra flags needed
static)
case "$OS" in
linux) # Assuming Alpine here
CCLIB="-static -no-pie";;
macosx)
FLAGS="-noautolink"
CCLIB="-lssl_stubs -lcamlstr -loverlap_stubs_stubs
-ldigestif_c_stubs -lmtime_clock_stubs
-lmirage_crypto_rng_unix_stubs -lmirage_crypto_stubs
-lcstruct_stubs -llwt_unix_stubs -lthreadsnat -lunix
-lbigstringaf_stubs"
LIBS="libssl libcrypto libev"
for lib in $LIBS; do
CCLIB="$CCLIB $(pkg-config $lib --variable libdir)/$lib.a"
done;;
*)
echo "No known static compilation flags for '$OS'" >&2
exit 1
esac;;
*)
echo "Invalid linking mode '$LINKING_MODE'" >&2
exit 2
esac
echo '('
for f in $FLAGS; do echo " $f"; done
for f in $CCLIB; do echo " -cclib $f"; done
echo ')'
</code></pre>
<p>Then you'll only have to run <code>LINKING_MODE=static dune build fserv.exe</code> to generate the static executable (wrapped in the Docker script above, in the case of Alpine), and can include that in your CI as well.</p>
<p>For real-world examples, you can check <a href="https://github.com/ocaml-sf/learn-ocaml/blob/master/scripts/static-build.sh">learn-ocaml</a> or <a href="https://github.com/ocaml/opam/blob/master/release/Makefile">opam</a>.</p>
<blockquote>
<h2>Related topics</h2>
<ul>
<li><a href="https://github.com/ocaml/opam/releases/download/2.1.0/opam-2.1.0-x86_64-macos">reproducible builds</a> should be a goal when you intend to distribute pre-compiled binaries.
</li>
<li><a href="https://github.com/AltGr/opam-bundle">opam-bundle</a> is a different, heavy-weight approach to distributing opam software to non-OCaml developers, that retains the "compile all from source" policy but provides one big package that bootstraps OCaml, opam and all the dependencies with a single command.-
</li>
</ul>
</blockquote>
opam 2.1.0 is released!https://ocamlpro.com/blog/2021_08_04_opam_2.1.0_is_released2021-08-04T08:12:13Z2021-08-04T08:12:13Z
David Allsopp (OCamlLabs)
Raja Boujbel
Louis Gesbert
Feedback on this post is welcomed on Discuss! We are happy to announce the release of opam 2.1.0. Many new features made it in (see the pre-release changelogs or release notes for the details), but here are a few highlights. What's new in opam 2.1? Integration of system dependencies (formerly the op...<p><em>Feedback on this post is welcomed on <a href="https://discuss.ocaml.org/t/ann-opam-2-1-0/8255">Discuss</a>!</em></p>
<p>We are happy to announce the release of opam 2.1.0.</p>
<p>Many new features made it in (see the <a href="https://github.com/ocaml/opam/blob/2.1.0/CHANGES">pre-release
changelogs</a> or <a href="https://github.com/ocaml/opam/releases">release
notes</a> for the details),
but here are a few highlights.</p>
<h2>What's new in opam 2.1?</h2>
<ul>
<li>Integration of system dependencies (formerly the opam-depext plugin),
increasing their reliability as it integrates the solving step
</li>
<li>Creation of lock files for reproducible installations (formerly the opam-lock
plugin)
</li>
<li>Switch invariants, replacing the "base packages" in opam 2.0 and allowing for
easier compiler upgrades
</li>
<li>Improved options configuration (see the new <code>option</code> and expanded <code>var</code> sub-commands)
</li>
<li>CLI versioning, allowing cleaner deprecations for opam now and also
improvements to semantics in future without breaking backwards-compatibility
</li>
<li>opam root readability by newer and older versions, even if the format changed
</li>
<li>Performance improvements to opam-update, conflict messages, and many other
areas
</li>
</ul>
<h3>Seamless integration of System dependencies handling (a.k.a. "depexts")</h3>
<p>opam has long included the ability to install system dependencies automatically
via the <a href="https://github.com/ocaml-opam/opam-depext">depext plugin</a>. This plugin
has been promoted to a native feature of opam 2.1.0 onwards, giving the
following benefits:</p>
<ul>
<li>You no longer have to remember to run <code>opam depext</code>: opam always checks
depexts (there are options to disable this or automate it for CI use).
Installation of an opam package in a CI system is now as easy as <code>opam install .</code>, without having to do the dance of <code>opam pin add -n/depext/install</code>. Just
one command now for the common case!
</li>
<li>The solver is only called once, which both saves time and stabilises the
behaviour of opam in cases where the solver result is not stable. It was
possible to get one package solution for the <code>opam depext</code> stage and a
different solution for the <code>opam install</code> stage, resulting in some depexts
missing.
</li>
<li>opam now has full knowledge of depexts, which means that packages can be
automatically selected based on whether a system package is already installed.
For example, if you have <em>neither</em> MariaDB nor MySQL dev libraries installed,
<code>opam install mysql</code> will offer to install <code>conf-mysql</code> and <code>mysql</code>, but if you
have the MariaDB dev libraries installed, opam will offer to install
<code>conf-mariadb</code> and <code>mysql</code>.
</li>
</ul>
<p><em>Hint: You can set <code>OPAMCONFIRMLEVEL=unsafe-yes</code> or
<code>--confirm-level=unsafe-yes</code> to launch non-interactive system package commands.</em></p>
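<p>For instance, a typical non-interactive CI step could look like the following (the flag is the one mentioned above; adapt it to your pipeline):</p>
<pre><code class="language-shell-session">$ opam install . --confirm-level=unsafe-yes   # solves, installs depexts and opam packages in one non-interactive step
</code></pre>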
<h3>opam lock files and reproducibility</h3>
<p>When opam was first released, it had the mission of gathering together
scattered OCaml source code to build a <a href="https://github.com/ocaml/opam-repository">community
repository</a>. As time marches on, the
size of the opam repository has grown tremendously, to over 3000 unique
packages with over 19500 unique versions. opam looks at all these packages and
is designed to solve for the best constraints for a given package, so that your
project can keep up with releases of your dependencies.</p>
<p>While this works well for libraries, we need a different strategy for projects
that need to test and ship using a fixed set of dependencies. To satisfy this
use-case, opam 2.0.0 shipped with support for <em>using</em> <code>project.opam.locked</code>
files. These are normal opam files but with exact versions of dependencies. The
lock file can be used as simply as <code>opam install . --locked</code> to have a
reproducible package installation.</p>
<p>With opam 2.1.0, the creation of lock files is also now integrated into the
client:</p>
<ul>
<li><code>opam lock</code> will create a <code>.locked</code> file for your current switch and project,
that you can check into the repository.
</li>
<li><code>opam switch create . --locked</code> can be used by users to reproduce your
dependencies in a fresh switch.
</li>
</ul>
<p>This lets a project simultaneously keep up with the latest dependencies
(without lock files) while providing a stricter set for projects that need it
(with lock files).</p>
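<p>As a rough sketch of the workflow described above (file names depend on your project):</p>
<pre><code class="language-shell-session">$ opam lock                        # writes a .locked file for the current switch and project
$ git add *.opam.locked            # commit it alongside the .opam file
$ opam switch create . --locked    # later, or on another machine: reproduce the exact versions
</code></pre>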
<p><em>Hint: You can export the full configuration of a switch with <code>opam switch export</code> new options, <code>--full</code> to have all packages metadata included, and
<code>--freeze</code> to freeze all VCS to their current commit.</em></p>
<h3>Switch invariants</h3>
<p>In opam 2.0, when a switch is created the packages selected are put into the
“base” of the switch. These packages are not normally considered for upgrade,
in order to ease pressure on opam's solver. This was a much bigger concern
early on in opam 2.0's development, but is less of a problem with the default
mccs solver.</p>
<p>However, it's a problem for system compilers. opam would detect that your
system compiler version had changed, but be unable to upgrade the ocaml-system
package unless you went through a slightly convoluted process with
<code>--unlock-base</code>.</p>
<p>In opam 2.1, base packages have been replaced by switch invariants. The switch
invariant is a package formula which must be satisfied on every upgrade and
install. All existing switches' base packages could just be expressed as
<code>package1 & package2 & package3</code> etc. but opam 2.1 recognises many existing
patterns and simplifies them, so in most cases the invariant will be
<code>"ocaml-base-compiler" {= "4.11.1"}</code>, etc. This means that <code>opam switch create my_switch ocaml-system</code> now creates a <em>switch invariant</em> of <code>"ocaml-system"</code>
rather than a specific version of the <code>ocaml-system</code> package. If your system
OCaml package is updated, <code>opam upgrade</code> will seamlessly switch to the new
package.</p>
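<p>A minimal sketch of that scenario, assuming the compiler comes from your distribution's packages:</p>
<pre><code class="language-shell-session">$ opam switch create my_switch ocaml-system   # the invariant is "ocaml-system", not a pinned version
$ # ...the system compiler gets upgraded by the OS package manager...
$ opam upgrade                                # the switch follows the new system compiler
</code></pre>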
<p>This also allows you to have switches which automatically install new point
releases of OCaml. For example:</p>
<pre><code class="language-shell-session">opam switch create ocaml-4.11 --formula='"ocaml-base-compiler" {>= "4.11.0" & < "4.12.0~"}' --repos=old=git+https://github.com/ocaml/opam-repository#a11299d81591
opam install utop
</code></pre>
<p>This creates a switch with OCaml 4.11.0 (the <code>--repos=</code> argument is just there to select a version of opam-repository from before 4.11.1 was released). Now issue:</p>
<pre><code class="language-shell-session">opam repo set-url old git+https://github.com/ocaml/opam-repository
opam upgrade
</code></pre>
<p>and opam 2.1 will automatically offer to upgrade to OCaml 4.11.1 along with a
rebuild of the switch. There's not yet a clean CLI for specifying the formula,
but we intend to iterate further on this with future opam releases so that
there is an easier way of saying “install OCaml 4.11.x”.</p>
<p><em>Hint: You can set up a default invariant that will apply for all new switches,
via a specific <code>opamrc</code>. The default one is <code>ocaml >= 4.05.0</code></em></p>
<h3>Configuring opam from the command-line</h3>
<p>Configuring opam is not a simple task: you need to use an <code>opamrc</code> file at init
stage, hand-edit the global or switch configuration files, or use <code>opam config var</code> for
additional variables. To ease that step, and permit more consistent tweaking of the opam
configuration, a new command was added: <code>opam option</code>.</p>
<p>For example:</p>
<ul>
<li><code>opam option download-jobs</code> gives the global <code>download-jobs</code> value (as it
exists only in global configuration)
</li>
<li><code>opam option jobs=6 --global</code> will set the number of parallel build
jobs opam is allowed to run (along with the associated <code>jobs</code> variable)
</li>
<li><code>opam option depext-run-commands=false</code> disables the use of <code>sudo</code> for
handling system dependencies; it will be replaced by a prompt to run the
installation commands
</li>
<li><code>opam option depext-bypass=m4 --global</code> bypasses the <code>m4</code> system package check
globally, while <code>opam option depext-bypass=m4 --switch myswitch</code> will only
bypass it in the selected switch
</li>
</ul>
<p>The command <code>opam var</code> is extended with the same format, acting on switch and
global variables.</p>
<p><em>Hint: to revert your changes use <code>opam option <field>=</code>; the field will revert to its
default value.</em></p>
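<p>Putting the above together, a short session might look like this:</p>
<pre><code class="language-shell-session">$ opam option jobs=6 --global   # set the global value
$ opam option jobs              # inspect it
$ opam option jobs= --global    # reset the field to its default
</code></pre>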
<h3>CLI Versioning</h3>
<p>A new <code>--cli</code> switch was added to the first beta release, but it's only now
that it's being widely used. opam is a complex enough system that sometimes bug
fixes need to change the semantics of some commands. For example:</p>
<ul>
<li><code>opam show --file</code> needed to change behaviour
</li>
<li>The addition of new controls for setting global variables means that the
<code>opam config</code> was becoming cluttered and some things want to move to <code>opam var</code>
</li>
<li><code>opam switch install 4.11.1</code> still works in opam 2.0, but it's really an OPAM
1.2.2 syntax.
</li>
</ul>
<p>Changing the CLI is exceptionally painful since it can break scripts and tools
which themselves need to drive <code>opam</code>. CLI versioning is our attempt to solve
this. The feature is inspired by the <code>(lang dune ...)</code> stanza in <code>dune-project</code>
files which has allowed the Dune project to rename variables and alter
semantics without requiring every single package using Dune to upgrade their
<code>dune</code> files on each release.</p>
<p>Now you can specify which version of opam you expect the command to be run
against. In day-to-day use of opam at the terminal, you wouldn't specify it,
and you'll get the latest version of the CLI. For example: <code>opam var --global</code>
is the same as <code>opam var --cli=2.1 --global</code>. However, if you issue <code>opam var --cli=2.0 --global</code>, you will be told that <code>--global</code> was added in 2.1 and so is
not available to you. You can see similar things with the renaming of <code>opam upgrade --unlock-base</code> to <code>opam upgrade --update-invariant</code>.</p>
<p>The intention is that <code>--cli</code> should be used in scripts, user guides (e.g. blog
posts), and in software which calls opam. The only decision you have to take is
the <em>oldest</em> version of opam which you need to support. If your script is using
a new opam 2.1 feature (for example <code>opam switch create --formula=</code>) then you
simply don't support opam 2.0. If you need to support opam 2.0, then you can't
use <code>--formula</code> and should use <code>--packages</code> instead. opam 2.0 does not have the
<code>--cli</code> option, so for opam 2.0 instead of <code>--cli=2.0</code> you should set the
environment variable <code>OPAMCLI</code> to <code>2.0</code>. As with <em>all</em> opam command line
switches, <code>OPAMCLI</code> is simply the equivalent of <code>--cli</code> which opam 2.1 will
pick-up but opam 2.0 will quietly ignore (and, as with other options, the
command line takes precedence over the environment).</p>
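<p>Concretely, a script that wants to be robust against future CLI changes could pin the version it was written for; the commands below simply reuse the flags discussed above:</p>
<pre><code class="language-shell-session">$ opam var --cli=2.1 --global jobs     # accepted by opam 2.1
$ opam var --cli=2.0 --global jobs     # rejected: --global was only added in the 2.1 CLI
$ OPAMCLI=2.0 opam exec -- dune build  # same pinning through the environment; opam 2.0 quietly ignores the variable
</code></pre>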
<p>Note that opam 2.1 sets <code>OPAMCLI=2.0</code> when building packages, so on the rare
instances where you need to use the <code>opam</code> command in a <em>package</em> <code>build:</code>
command (or in your build system), you <em>must</em> specify <code>--cli=2.1</code> if you're
using new features.</p>
<p>Since 2.1.0~rc2, CLI versioning applies to opam environment variables. The
previous behaviour was to ignore unknown or wrongly set environment variables;
now you will get a warning letting you know that the environment variable
won't be handled by this version of opam.</p>
<p>To avoid breaking compatibility for some widely used deprecated options,
a <em>default</em> CLI is introduced: when no CLI is specified, those deprecated
options are still accepted. This concerns the <code>opam exec</code> and <code>opam var</code> subcommands.</p>
<p>There's even more detail on this feature <a href="https://github.com/ocaml/opam/wiki/Spec-for-opam-CLI-versioning">in our
wiki</a>. We're
hoping that this feature will make it much easier in future releases for opam
to make required changes and improvements to the CLI without breaking existing
set-ups and tools.</p>
<p><em>Note: for users of the opam libraries: since 2.1, environment variables are no longer
loaded by the libraries, only by the opam client. You need to load them explicitly.</em></p>
<h3>opam root portability</h3>
<p>The opam root format changes during opam's life-cycle: new fields are added or
removed, new files are added; an older opam version sometimes can no longer
read an upgraded or newly created opam root. The opam root format has been updated
to allow new versions of opam to indicate that the root may still be read by
older versions of the opam libraries. A plugin compiled against the 2.0.9 opam
libraries will therefore be able to read information about an opam 2.1 root
(plugins and tools compiled against 2.0.8 are unable to load opam 2.1.0 roots).
It is a <em>read-only</em>, best-effort access; any attempt to modify the opam root
fails.</p>
<p><em>Hint: for opam libraries users, you can safely load states with
<a href="https://github.com/ocaml/opam/blob/master/src/state/opamStateConfig.mli"><code>OpamStateConfig</code></a>
load functions.</em></p>
<p><strong>Tremendous thanks to all involved people, who've developed, tested & retested,
helped with issue reports, comments, feedback...</strong></p>
<h2>Try it!</h2>
<p>In case you plan a possible rollback, you may want to first backup your
<code>~/.opam</code> directory.</p>
<p>The upgrade instructions are unchanged:</p>
<ol>
<li>Either from binaries: run
</li>
</ol>
<pre><code class="language-shell-session">bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.1.0"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.1.0">the Github "Releases" page</a> to your PATH.</p>
<ol start="2">
<li>Or from source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.1.0#compiling-this-repo">README</a>.
</li>
</ol>
<p>You should then run:</p>
<pre><code class="language-shell-session">opam init --reinit -ni
</code></pre>
opam 2.0.9 releasehttps://ocamlpro.com/blog/2021_08_03_opam_2.0.9_release2021-08-03T08:12:13Z2021-08-03T08:12:13Z
Raja Boujbel
Louis Gesbert
Feedback on this post is welcomed on Discuss! We are pleased to announce the minor release of opam 2.0.9. This new version contains some back-ported fixes. New features Back-ported ability to load upgraded roots read-only; allows applications compiled with opam-state 2.0.9 to load a root which has b...<p><em>Feedback on this post is welcomed on <a href="https://discuss.ocaml.org/t/ann-opam-2-1-0/8255">Discuss</a>!</em></p>
<p>We are pleased to announce the minor release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.9">opam 2.0.9</a>.</p>
<p>This new version contains some <a href="https://github.com/ocaml/opam/pull/4547">back-ported</a> fixes.</p>
<h2>New features</h2>
<ul>
<li>Back-ported ability to load upgraded roots read-only; allows applications compiled with opam-state 2.0.9 to load a root which has been upgraded to opam 2.1 [<a href="https://github.com/ocaml/opam/issues/4636">#4636</a>]
</li>
<li>macOS sandbox now supports <code>OPAM_USER_PATH_RO</code> for adding a custom read-only directory to the sandbox [<a href="https://github.com/ocaml/opam/issues/4589">#4589</a>, <a href="https://github.com/ocaml/opam/issues/4609">#4609</a>]
</li>
<li><code>OPAMROOT</code> and <code>OPAMSWITCH</code> now reflect the <code>--root</code> and <code>--switch</code> parameters in the package build [<a href="https://github.com/ocaml/opam/issues/4668">#4668</a>]
</li>
<li>When built with opam-file-format 2.1.3+, opam-format 2.0.x displays better errors for newer opam files [<a href="https://github.com/ocaml/opam/issues/4394">#4394</a>]
</li>
</ul>
<h2>Bug fixes</h2>
<ul>
<li>Linux sandbox now mounts <em>host</em> <code>$TMPDIR</code> read-only, then sets the <em>sandbox</em> <code>$TMPDIR</code> to a new separate tmpfs. <strong>Hardcoded <code>/tmp</code> access no longer works if <code>TMPDIR</code> points to another directory</strong> [<a href="https://github.com/ocaml/opam/issues/4589">#4589</a>]
</li>
<li>Stop clobbering <code>DUNE_CACHE</code> in the sandbox script [<a href="https://github.com/ocaml/opam/issues/4535">#4535</a>, fixing <a href="https://github.com/ocaml/dune/issues/4166">ocaml/dune#4166</a>]
</li>
<li>Ctrl-C now correctly terminates builds with bubblewrap; sandbox now requires bubblewrap 0.1.8 or later [<a href="https://github.com/ocaml/opam/issues/4400">#4400</a>]
</li>
<li>Linux sandbox script no longer makes <code>PWD</code> read-write on remove actions [<a href="https://github.com/ocaml/opam/issues/4589">#4589</a>]
</li>
<li>Lint W59 and E60 no longer trigger for packages flagged <code>conf</code> [<a href="https://github.com/ocaml/opam/issues/4549">#4549</a>]
</li>
<li>Reduce the length of temporary file names for pin caching to ease pressure on Windows [<a href="https://github.com/ocaml/opam/issues/4590">#4590</a>]
</li>
<li>Security: correct quoting of arguments when removing switches [<a href="https://github.com/ocaml/opam/issues/4707">#4707</a>]
</li>
<li>Stop advertising the removed option <code>--compiler</code> when creating local switches [<a href="https://github.com/ocaml/opam/issues/4718">#4718</a>]
</li>
<li>Pinning no longer fails if the archive's opam file is malformed [<a href="https://github.com/ocaml/opam/issues/4580">#4580</a>]
</li>
<li>Fish: stop using deprecated <code>^</code> syntax to fix support for Fish 3.3.0+ [<a href="https://github.com/ocaml/opam/issues/4736">#4736</a>]
</li>
</ul>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.0.9"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.9">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.9#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
Detecting identity functions in Flambdahttps://ocamlpro.com/blog/2021_07_16_detecting_identity_functions_in_flambda2021-07-16T08:12:13Z2021-07-16T08:12:13Z
Leo Boitel
In some discussions among OCaml developers around the empty type (PR#9459), some people mused about the possibility of annotating functions with an attribute telling the compiler that the function should be trivial, and always return a value strictly equivalent to its argument.Curious about the feas...<blockquote>
<p>In some discussions among OCaml developers around the empty type (<a href="https://github.com/ocaml/ocaml/issues/9459">PR#9459</a>), some people mused about the possibility of annotating functions with an attribute telling the compiler that the function should be trivial, and always return a value strictly equivalent to its argument.<br>Curious about the feasibility of implementing this feature, we advertised an internship with our compiler team aimed at exploring this subject.<br>We welcomed Léo Boitel during three months to work on this topic, with Vincent Laviron as mentor, and we're proud to let him show off what he has achieved in this post.</p>
</blockquote>
<h3>The problem at hand</h3>
<p>OCaml's strong typing system is one of its main perks: the abstraction it provides makes it possible to write safe code. Most basic design mistakes translate directly into a typing error, and the user cannot corrupt memory since it is automatically managed by the compiler and runtime.</p>
<p>However, these perks also keep a power user from implementing certain optimizations, in particular those linked to memory representation, since it cannot be manipulated directly.</p>
<p>A good example would be this piece of code:</p>
<pre><code class="language-Ocaml">type return = Ok of int | Failure
let id = function
| Some x -> Ok x
| None -> Failure
</code></pre>
<p>In terms of memory representation, this function is indeed the identity function. <code>Some x</code> and <code>Ok x</code> share the same representation (and so do <code>None</code> and <code>Failure</code>). However, this identity is invisible to the user. Even if the user knows the representation is the same, they would need to use this function to avoid a typing error.</p>
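<p>This shared representation can be observed from the toplevel with the unsafe <code>Obj.magic</code> primitive (used here purely to illustrate the point, not as part of the proposed optimization):</p>
<pre><code class="language-shell-session">$ ocaml
# type return = Ok of int | Failure;;
type return = Ok of int | Failure
# (Obj.magic (Some 3) : return);;   (* a block with tag 0 and one field, read back as [Ok] *)
- : return = Ok 3
# (Obj.magic None : return);;       (* the immediate 0, read back as the first constant constructor *)
- : return = Failure
</code></pre>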
<p>Another good example would be this one:</p>
<pre><code class="language-Ocaml">type record = { a:int; b:int }
let id (x,y) = { a = x; b = y }
</code></pre>
<p>Even if those functions are the identity, they come with a cost: not only do they cost a function call, they reallocate the result instead of just returning their argument directly. Detecting those functions would allow us to produce interesting optimizations.</p>
<h3>Hurdles</h3>
<p>If we want to detect identities, we quickly hit the problem of recursive functions: how does one recognize identity in those cases? Can a function be an identity if it doesn't always terminate, or if it never does?</p>
<p>Once we have a good definition of what exactly an identity function is, we still need to prove that an existing function fits the definition. Indeed, we want to assure the user that this optimization will not change the observable behavior of the program.</p>
<p>We also want to avoid breaking type safety. As an example, the following function:</p>
<pre><code class="language-Ocaml">let rec fake_id = function
| [] -> 0
| t::q -> fake_id (t::q)
</code></pre>
<p>A naive induction proof would allow us to replace this function with the identity, as <code>[]</code> and <code>0</code> share the same memory representation. However, this is unsafe as applying this to a non-empty list would return a list even if this function has an <code>int</code> type (we'll talk more about it later).</p>
<p>To tackle those challenges, we started the internship with a theoretical study that lasted for three fourths of the allocated time, and finally implemented a practical solution in the Flambda representation of the compiler.</p>
<h3>Theoretical results</h3>
<p>We worked on extensions of lambda-calculus (implemented in OCaml) in order to gradually test our ideas in a simpler framework than the full Flambda.</p>
<h4>Pairs</h4>
<p>We started with a lambda-calculus to which we only added the concept of pairs. To prove identities, every function has to be annotated as identity or not. We then prove these annotations by β-reducing the function bodies. After each recursive reduction, we apply a rule that states that a pair made of the first and second projection of a variable is equal to that variable. We do not reduce applications, but we replace them by the argument if the concerned function is annotated as identity.</p>
<p>Using this method, we maintain a reasonable complexity compared to a full β-reduction which would be unrealistic on a big program.</p>
<p>We then add higher-order capabilities by allowing annotations of the form <code>Annotation → Annotation</code>. Functions such as <code>List.map</code> can that way be abstracted as <code>Id → Id</code>. Even though this solution doesn't cover every case, most real-world usages are recognized by these patterns.</p>
<h4>Tuple reconstruction</h4>
<p>We then move from just pairs to tuples of arbitrary size. This adds a new problem: if we make a pair out of the first two fields of a variable, this is no longer necessarily that variable, as it may have more than two fields.</p>
<p>We then have two solutions: we can first annotate every projection with the size of the involved tuple to know if we are indeed reconstructing the entire variable. As an example, if we make a pair from the fields of a triplet, we know there is no way to simplify this reconstruction.</p>
<p>Another solution, more ambitious, is to adopt a less restrictive definition of equality and to allow the replacement of <code>(x,y)</code> by <code>(x,y,z)</code>. Indeed, if the variable was typed as a pair, we are guaranteed that the third field will never be accessed. The behavior of the program will therefore never be affected by this extension.</p>
<p>Though this allows us to avoid many allocations, it may also increase memory usage in some cases: if the triplet ceases to be used, it won't be deallocated by the Garbage Collector (GC) and the field <code>z</code> will be kept in memory as long as <code>(x,y)</code> is still accessible.</p>
<p>This approach remains interesting to us, as long as it is manually enabled by the user for some specific blocks.</p>
<h4>Recursion</h4>
<p>We now add recursive definitions to our language, through the use of a fixpoint operator.</p>
<p>To prove that a recursive function is the identity, we have to use induction. The main difficulty is to prove that the function indeed terminates to ensure the validity of the induction.</p>
<p>We can separate this into three different levels of safety. The first option is not to prove termination at all, and to let the user state which functions they know will terminate. We can then assume the function is the identity and simplify its body under that hypothesis. This approach is enough for most practical cases, but its main problem lies in the fact that it allows writing unsafe code (as we've already seen).</p>
<p>Our second option is to limit our induction hypothesis to recursive applications on elements "smaller" than the argument. An element is defined as smaller if it is a projection of the argument or a projection of a smaller element. This is not enough to prove that the function will terminate (the argument might be cyclic, for example) but is enough to ensure type safety. The reason is that any possibly returned value is constructed (as it cannot directly come from a recursive call) and therefore has a defined type. Typing would fail if the function were to return a value that cannot be identified with its argument.</p>
<p>Finally, we may want to establish a perfect equivalence between the function and the identity function before simplifying it. In that case, we propose to create a special annotation for functions that are the identity when applied to a non-cyclical object. We can prove they have this property with the already described induction. The difficulty now lies in applying the simplification only to valid applications: if an object is immutable, wasn't recursively defined and is made of components that also have that property, we can declare that object inductive and simplify applications on it. The inductive state of variables can be propagated during our recursive pass of optimization.</p>
<h3>Block reconstruction</h3>
<p>The representation of blocks in Flambda provides interesting challenges in terms of equality detection, which is often crucial to prove an identity. It is very hard to detect an identical block reconstruction.</p>
<h4>Blocks in Flambda</h4>
<h5>Variants</h5>
<p>The blocks in Flambda come from the existence of variants in OCaml: one type may have several different constructors, as we can see in</p>
<pre><code class="language-Ocaml">type choice = A of int | B of int
</code></pre>
<p>When OCaml is compiled to Flambda, the constructor information is lost and replaced by a tag. The tag is a number between 0 and 255, stored in the header of the object's memory representation, that indicates which constructor was used. As an example, an element of type <code>choice</code> would have tag <code>0</code> for the <code>A</code> constructor, and <code>1</code> for <code>B</code>.</p>
<p>That tag is kept at runtime, which allows, for example, implementing pattern matching as a simple switch in Flambda that compares the tag to decide which branch to execute next.</p>
<p>This system complicates our task as Flambda's typing doesn't inform us which type the constructor is supposed to have, and therefore keeps us from easily knowing if two variants are indeed equal.</p>
<h5>Tag generalization</h5>
<p>To complicate things, tags are actually used for any block, meaning tuples, modules or functions (as a matter of fact, almost anything but constant constructors and integers). If the object doesn't have variants, it will usually have tag 0. This tag is never read (as there are no variants to differentiate) but keeps us from simply comparing two tuples, because Flambda will simply see two blocks of unknown tag.</p>
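<p>The tags themselves can be observed from the toplevel with the <code>Obj</code> module (again, only for illustration purposes):</p>
<pre><code class="language-shell-session">$ ocaml
# type choice = A of int | B of int;;
type choice = A of int | B of int
# Obj.tag (Obj.repr (A 1)), Obj.tag (Obj.repr (B 1)), Obj.tag (Obj.repr (1, 2));;
- : int * int * int = (0, 1, 0)
</code></pre>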
<h5>Inlining</h5>
<p>Finally, this system is optimized by inlining tuples: if a variant has a shape <code>Pair of int * int</code>, it will often be flattened into a tuple <code>(tag Pair, int, int)</code>.</p>
<p>This also means that variants can have an arbitrary size, which is also unknown in Flambda.</p>
<h4>Existing approach</h4>
<p>A partial solution to the problem already existed in a Pull Request (PR) you can read <a href="https://github.com/ocaml/ocaml/pull/8958">here</a>.</p>
<p>The chosen approach in this PR is the natural one: we use the switch to gain information on the tag of a block, depending on the branch taken. The PR also lets us know the mutability and size of the block in each branch, starting from OCaml (where this information is known as it is explicit in the pattern matching) and propagating the knowledge to Flambda.</p>
<p>This allows registering every block on which a switch is performed, along with its tag, size and mutability. We can then detect whether one of them is reconstructed with the <code>Pmakeblock</code> primitive.</p>
<p>Unfortunately, this path has its limits as there are numerous cases where the tag and size could be known without performing a switch on the value. As an example, this doesn't allow the simplification of tuple reconstruction.</p>
<h4>New solution</h4>
<p>Our new solution will have to propagate more information from OCaml into Flambda. This propagation is based on two PRs that already existed for Flambda 2, which annotated each projection (<code>Pfield</code>) in the lambda representation with typing information. We add <a href="https://github.com/ocaml-flambda/ocaml/commit/fa5de9e64ff1ef04b596270a8107d1f9dac9fb2d">block mutability</a> and <a href="https://github.com/ocaml-flambda/ocaml/pull/53">tag and finally size</a>.</p>
<p>Our first contribution was to translate these PRs to Flambda 1, and to propagate this information correctly from lambda to Flambda.</p>
<p>We then had access to all the information necessary to detect and prove block reconstruction: not only do we have a list of blocks that were pattern-matched, we can also make a list of partially immutable blocks, meaning blocks for which we know that some fields are immutable.</p>
<p>Here's how we use it:</p>
<h6>Block discovery</h6>
<p>As soon as we find a projection, we verify whether it is done on an immutable block of known size. If so, we add that block to the list of partial blocks. We verify that the information we have on the tag and size is compatible with the already known projections. If all of the fields of the block are known, the block is added to the list of simplifiable blocks.</p>
<p>Of course, we also keep track of known blocks though switches.</p>
<h6>Simplification</h6>
<p>This part is similar to the original PR: when an immutable block is met, we check whether this block is known as simplifiable. In that case we avoid a reallocation.</p>
<p>Compared to the original approach, we also reduced the asymptotic complexity (from quadratic to linear) by registering the association of every projection variable to its index and original block. We also modified some implementation details that could have triggered a bug when associated with our PR.</p>
<h4>Example</h4>
<p>Let's consider this function:</p>
<pre><code class="language-Ocaml">type typ1 = A of int | B of int * int
type typ2 = C of int | D of {x:int; y:int}
let id = function
| A n -> C n
| B (x,y) -> D {x; y}
</code></pre>
<p>The current compiler would produce the resulting Flambda output:</p>
<pre><code>End of middle end:
let_symbol
(camlTest__id_21
(Set_of_closures (
(set_of_closures id=Test.8
(id/5 = fun param/7 ->
(switch*(0,2) param/7
case tag 0:
(let
(Pmakeblock_arg/11 (field 0<{../../test.ml:4,4-7}> param/7)
Pmakeblock/12
(makeblock 0 (int)<{../../test.ml:4,11-14}>
Pmakeblock_arg/11))
Pmakeblock/12)
case tag 1:
(let
(Pmakeblock_arg/15 (field 1<{../../test.ml:5,4-11}> param/7)
Pmakeblock_arg/16 (field 0<{../../test.ml:5,4-11}> param/7)
Pmakeblock/17
(makeblock 1 (int,int)<{../../test.ml:5,17-23}>
Pmakeblock_arg/16 Pmakeblock_arg/15))
Pmakeblock/17)))
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
(camlTest__id_5_closure (Project_closure (camlTest__id_21, id/5)))
(camlTest (Block (tag 0, camlTest__id_5_closure)))
End camlTest
</code></pre>
<p>Our optimization allows to detect that this function reconstructs a similar block and therefore can simplify it:</p>
<pre><code>End of middle end:
let_symbol
(camlTest__id_21
(Set_of_closures (
(set_of_closures id=Test.7
(id/5 = fun param/7 ->
(switch*(0,2) param/7
case tag 0 (1): param/7
case tag 1 (2): param/7))
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
(camlTest__id_5_closure (Project_closure (camlTest__id_21, id/5)))
(camlTest (Block (tag 0, camlTest__id_5_closure)))
End camlTest
</code></pre>
<h4>Possible improvements</h4>
<h5>Equality relaxation</h5>
<p>We can use observational equality studied in the theoretical part for block equality in order to avoid more allocations. The implementation is simple:</p>
<p>When a block is created, to know if it will be allocated, the normal course of action is to check if all of its fields are the known projections of another block, with the same index, and if the block sizes are the same. We can just remove that last check.</p>
<p>Implementing this was a bit trickier because of several practical details. First, since we want that optimization to be triggered only on user-annotated blocks, we had to propagate that annotation to Flambda.</p>
<p>Additionally, if we only implement that optimization, numerous optimization cases will be ignored because unused variables are simplified before our optimization pass. As an example, if a function looks like</p>
<pre><code class="language-Ocaml">let loose_id (a,b,c) = (a,b)
</code></pre>
<p>The <code>c</code> variable will be simplified away before reaching Flambda, and there will be no way to prove that <code>(a,b,c)</code> is immutable as its third field could not be. This problem is being solved on Flambda2 thanks to a PR that propagates mutability information for every block, but we didn't have the time necessary to migrate it on Flambda 1.</p>
<h3>Detecting recursive identities</h3>
<p>Now that we can detect block reconstruction, we're left with solving the problem of recursive functions.</p>
<h4>Unsafe approach</h4>
<p>We began the implementation of a pass that contains no termination proof. The idea is to add the proof later, or to authorize non-terminating functions to be simplified as long as they type correctly (see previously in the theory part).</p>
<p>For now, we trust the user to verify these properties manually.</p>
<p>Hence, we modified the function simplification procedure: when a function with a single argument is simplified, we first assume that this function is the identity before simplifying its body. We then check whether the result is equivalent to an identity by recursively going through it, so as to cover as many cases as possible (for example in conditional branchings). If that is the case, the function is replaced by the identity; otherwise, we go back to a normal simplification, without the induction hypothesis.</p>
<h4>Constant propagation</h4>
<p>We took some time to improve the code that checks whether the body of a function is an identity or not, so that it also handles constant values: it propagates the identity information we have on the argument through conditional branching.</p>
<p>This way, on a function like</p>
<pre><code class="language-Ocaml">type truc = A | B | C
let id = function
| A -> A
| B -> B
| C -> C
</code></pre>
<p>or even</p>
<pre><code class="language-Ocaml">let id x = if x=0 then 0 else x
</code></pre>
<p>We can successfully detect identity.</p>
<h4>Examples</h4>
<h5>Recursive functions</h5>
<p>We can now detect recursive identities:</p>
<pre><code class="language-Ocaml">let rec listid = function
| t::q -> t::(listid q)
| [] -> []
</code></pre>
<p>Used to compile to:</p>
<pre><code>End of middle end:
let_rec_symbol
(camlTest__listid_5_closure
(Project_closure (camlTest__set_of_closures_20, listid/5)))
(camlTest__set_of_closures_20
(Set_of_closures (
(set_of_closures id=Test.11
(listid/5 = fun param/7 ->
(if param/7 then begin
(let
(apply_arg/13 (field 1<{../../test.ml:9,4-8}> param/7)
apply_funct/14 camlTest__listid_5_closure
Pmakeblock_arg/15
*(apply*[listid/5]<{../../test.ml:9,15-25}> apply_funct/14
apply_arg/13)
Pmakeblock_arg/16 (field 0<{../../test.ml:9,4-8}> param/7)
Pmakeblock/17
(makeblock 0<{../../test.ml:9,12-25}> Pmakeblock_arg/16
Pmakeblock_arg/15))
Pmakeblock/17)
end else begin
(let (const_ptr_zero/27 Const(0a)) const_ptr_zero/27) end))
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
let_symbol (camlTest (Block (tag 0, camlTest__listid_5_closure)))
End camlTest
</code></pre>
<p>But is now detected as being the identity:</p>
<pre><code>End of middle end:
let_symbol
(camlTest__set_of_closures_20
(Set_of_closures (
(set_of_closures id=Test.13 (listid/5 = fun param/7 -> param/7)
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
(camlTest__listid_5_closure
(Project_closure (camlTest__set_of_closures_20, listid/5)))
(camlTest (Block (tag 0, camlTest__listid_5_closure)))
End camlTest
</code></pre>
<h5>Unsafe example</h5>
<p>However, we can use the unsafety of the feature to bypass the typing system and read a memory address as if it were an integer:</p>
<pre><code class="language-Ocaml">type bugg = A of int*int | B of int
let rec bug = function
| A (a,b) -> (a,b)
| B x -> bug (B x)
let (a,b) = (bug (B 42))
let _ = print_int b
</code></pre>
<p>This function will be simplified to the identity even though the <code>bugg</code> type is not compatible with tuples; trying to project on the second field of the <code>B</code> variant will access an undefined part of memory:</p>
<pre><code>$ ./unsafe.out
47423997875612
</code></pre>
<h4>Possible improvements - short term</h4>
<h5>Function annotation</h5>
<p>A theoretically simple addition would be to leave the choice of applying unsafe optimizations to the user. We lacked the time to do the work of propagating that information to Flambda, but it would not be hard to implement.</p>
<h5>Order on arguments</h5>
<p>For a safer optimization, we could use the idea developed in the theoretical part to make the optimization correct on non-cyclical objects and more importantly give us typing guarantees to avoid the problem we just saw.</p>
<p>To get this guarantee, we would have to change the simplification pass by adding an optional function-argument pair to the environment. When this option is present, the pair indicates that we are inside the body currently being simplified, and that applications on smaller elements can be simplified to the identity. Of course, the pass would need to be modified to keep track of which elements are not smaller than the argument.</p>
<h4>Possible improvements - long term</h4>
<h5>Exclusion of cyclical objects</h5>
<p>As described in the theoretical part, we could recursively deduce which objects are cyclical and attempt to remove them from our optimization. The problem is then that instead of having to replace functions by the identity, we need to add a special annotation that represents <code>IdRec</code>.</p>
<p>This amounts to a lot of added implementation complexity when compiling over several files, as we need access to the interface of already compiled files to know when the optimization can be used.</p>
<p>A possibility would be to use .cmx files to store this information when the file is compiled, but that kind of work would have taken too long to be achieved during the internship. Moreover, the practicality of that choice is far from obvious: it would complicate the optimization pass for a small improvement with respect to a version that would be correct on non-cyclical objects and activated through annotations.</p>
Détection de fonctions d’identité dans Flambdahttps://ocamlpro.com/blog/2021_07_15_fr_detection_de_fonctions_didentite_dans_flambda2021-07-15T08:12:13Z2021-07-15T08:12:13Z
Leo Boitel
In some discussions among OCaml developers around the empty type (PR#9459), some people mused about the idea of annotating functions with an attribute telling the compiler that the function should be trivial, and always return a value strictly equivalent to its argument. We ...<blockquote>
<p>In some discussions among OCaml developers around the empty type (<a href="https://github.com/ocaml/ocaml/issues/9459">PR#9459</a>), some people mused about the idea of annotating functions with an attribute telling the compiler that the function should be trivial, and always return a value strictly equivalent to its argument.
We were curious to see whether implementing such a feature would be possible, and we published an internship offer to explore this subject.
OCamlPro's compiler team thus welcomed Léo Boitel for three months to work on this topic, with Vincent Laviron as his mentor. We are proud of the results Léo achieved!</p>
<p>Here is what Léo wrote about it 🙂</p>
</blockquote>
<h3>Description of the problem</h3>
<p>OCaml's strong typing is one of its great advantages: it makes it possible to write safer code thanks to the abstraction capacity it offers. Most design mistakes translate directly into a typing error, and the user cannot make mistakes in memory handling since memory is entirely managed by the compiler.</p>
<p>However, these advantages prevent the user from performing certain optimizations themselves, in particular those linked to memory representation, since it cannot be accessed directly.</p>
<p>A classic case would be the following:</p>
<pre><code class="language-Ocaml">type return = Ok of int | Failure
let id = function
| Some x -> Ok x
| None -> Failure
</code></pre>
<p>This function is an identity, since the memory representation of <code>Some x</code> and <code>Ok x</code> is the same (likewise for <code>None</code> and <code>Failure</code>). However, the user does not see this, and even if they did, they would still need this function to keep the code well typed.</p>
<p>Another example would be the following:</p>
<pre><code class="language-Ocaml">type record = { a:int; b:int }
let id (x,y) = { a = x; b = y }
</code></pre>
<p>Even though these functions are identities, they come at a cost: besides costing us a function call, they reallocate the result instead of returning their argument directly. This is why detecting them would enable interesting optimizations.</p>
<h3>Difficulties</h3>
<p>If we want to be able to detect identities, we quickly run into the problem of recursive functions: how do we define identity for them? Can a function be the identity if it does not always terminate, or never does?</p>
<p>Once identity has been defined, the next problem is proving that a function is indeed the identity. Indeed, we want to guarantee to the user that this optimization will not change the observable behavior of the program.</p>
<p>We also want to avoid introducing holes in type safety. For instance, consider a function of the following shape:</p>
<pre><code class="language-Ocaml">let rec fake_id = function
| [] -> 0
| t::q -> fake_id (t::q)
</code></pre>
<p>Une preuve naïve par induction nous ferait remplacer cette fonction par l’identité, car <code>[]</code> et <code>0</code> ont la même représentation mémoire. C’est dangereux car le résultat d’une application à une liste non-vide sera une liste alors qu’il est typé comme un entier (voir exemples plus bas).</p>
<p>Pour résoudre ces problèmes, nous avons commencé par une partie théorique qui a occupé les trois quarts du stage, pour finir par une partie pratique d’implémentation dans Flambda.</p>
<h3>Résultats théoriques</h3>
<p>Pour cette partie, nous avons travaillé sur des extensions de lambda-calcul, implémentées en OCaml, pour pouvoir tester nos idées au fur et à mesure dans un cadre plus simple que Flambda.</p>
<h4>Paires</h4>
<p>Nous avons commencé par un lambda calcul auquel on ajoute seulement des paires. Pour effectuer nos preuves, on annote toutes les fonctions comme des identités ou non. On prouve ensuite ces annotations en β-réduisant le corps des fonctions. Après chaque réduction récursive, on applique une règle qui dit qu’une paire composée des deux projections d’une variable est égale à la variable. On ne réduit pas les applications, mais on les remplace par l’argument si la fonction est annotée comme une identité.</p>
<p>We thus keep a reasonable complexity compared to a full β-reduction, which would obviously be unrealistic for large programs.</p>
<p>We then move to higher order by allowing annotations of the form <code>Annotation → Annotation</code>. Functions such as <code>List.map</code> can therefore be represented as <code>Id → Id</code>. Although this solution is not complete, it covers the vast majority of use cases.</p>
<h4>Tuple reconstruction</h4>
<p>We then move from pairs to tuples of arbitrary size. This makes the problem harder: if we build a pair from the projections of the first two fields of a variable, it is not necessarily equal to that variable, since the variable may have more fields.</p>
<p>We then have two solutions. First, we can annotate projections with the size of the tuple, so as to know whether we are rebuilding the whole variable. For example, if we rebuild a pair from two projections of a triple, we know that this reconstruction cannot be simplified.</p>
<p>The other, more ambitious solution is to adopt a less strict definition of equality, and to say that we can replace, for example, <code>(x,y)</code> by <code>(x,y,z)</code>. Indeed, if the variable was typed as a pair, we have the guarantee that the field <code>z</code> will never be accessed anyway. The behaviour of the program will therefore be the same if we extend the variable with additional fields.</p>
<p>Using this observational equality avoids many allocations, but it can use more memory in some cases: if the triple stops being used, it will not be deallocated by the Garbage Collector (GC), and the field <code>z</code> will therefore stay in memory for nothing as long as <code>(x,y)</code> is in use.</p>
<p>This approach remains interesting, at least if the user is given the possibility to enable it manually for certain blocks.</p>
<h4>Recursion</h4>
<p>We now add recursive definitions to our language, through a fixpoint operator.</p>
<p>To prove that a recursive function is the identity, we must proceed by induction. The difficulty is then to prove that the function terminates, so that the induction is sound.</p>
<p>We can distinguish three levels of proof. The first option is not to prove termination, and to let the user choose which functions they are sure terminate. We then assume that the function is the identity, and simplify its body under this hypothesis. This approach is sufficient for most practical cases, but its main problem is that it allows writing code that breaks type safety, as discussed above.</p>
<p>The second option is to make our induction hypothesis only on applications of the function to elements "smaller" than the argument. An element is defined as such if it is a projection of the argument, or a projection of a smaller element. This is not enough to prove that the function terminates (for instance if the argument is cyclic), but it is enough to preserve type safety. Indeed, it implies that all the possible return values of the function are constructed (since they cannot come directly from a recursive call), and therefore have a well-defined type. Type checking would thus fail if the function could return a value that cannot be identified with its argument.</p>
<p>Finally, we may want a perfect observational equivalence between the function and the identity before simplifying it. In that case, the solution we propose is to create a special annotation for functions that are the identity when applied to a non-cyclic object. We can prove that they have this property using the induction described above. The difficulty is then to perform the simplification on the right applications: if an object is immutable, is not defined recursively, and all its sub-objects satisfy this property, we call it inductive and we may simplify applications to it. We propagate the inductive status of objects during our recursive optimization pass.</p>
<h3>Block reconstruction</h3>
<p>The representation of blocks in Flambda raises interesting problems when it comes to detecting their equality, which is often necessary to prove an identity. Indeed, it is difficult to detect that a block is being rebuilt identically.</p>
<h4>Blocks in Flambda</h4>
<h5>Variants</h5>
<p>The blocks in Flambda come from the existence of variants in OCaml: one type may have several different constructors, as we can see in</p>
<pre><code class="language-Ocaml">type choice = A of int | B of int
</code></pre>
<p>When OCaml is compiled to Flambda, the information about which constructor an object uses is lost, and is replaced by a tag. The tag is a number stored in a header of the object's memory representation, between <code>0</code> and <code>255</code>, identifying the object's constructor. For example, an object of type <code>choice</code> would have the tag <code>0</code> if it is an <code>A</code> and <code>1</code> if it is a <code>B</code>.</p>
<p>The tag is thus present in memory at run time, which makes it possible, for example, to implement OCaml's pattern matching as a switch in Flambda, which performs simple comparisons on the tag to decide which branch to take.</p>
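<p>For illustration (our own snippet, not part of the original post), the runtime tag can be observed directly with the <code>Obj</code> module:</p>
<pre><code class="language-Ocaml">(* Non-constant constructors are heap blocks whose header stores the tag:
   here A gets tag 0 and B gets tag 1. *)
type choice = A of int | B of int

let () =
  Printf.printf "tag of A 1 = %d\n" (Obj.tag (Obj.repr (A 1)));  (* 0 *)
  Printf.printf "tag of B 1 = %d\n" (Obj.tag (Obj.repr (B 1)))   (* 1 *)
</code></pre>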
<p>This system makes our task harder, since Flambda's typing does not tell us which kind of constructor a variant contains, and therefore prevents us from easily deciding whether two variants are equal.</p>
<h5>Generalisation of tags</h5>
<p>To make things more complex, tags are in fact used for all blocks, that is, tuples, modules, functions (in fact almost all values except integers and constant constructors). When the object is not a variant, it is usually given tag 0. This tag is therefore never read afterwards (since we never match on the object), but it prevents us from simply comparing two tuples, since in Flambda we will just see two objects with unknown tags.</p>
<h5>Inlining</h5>
<p>Finally, this system is optimized by inlining tuples: if we have a variant of type <code>Pair of int*int</code>, instead of being represented as the tag of <code>Pair</code> together with a memory address pointing to a pair (that is, a tag 0 and the two integers), the pair is inlined and the object has the form <code>(tag Pair, integer, integer)</code>.</p>
<p>This implies that variants have an arbitrary size, which is also unknown in Flambda.</p>
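<p>As a side note (our own example, relying on the standard OCaml value model), this difference already shows in the type declaration: a constructor with several arguments is flattened into a single block, while a constructor carrying a tuple keeps an indirection:</p>
<pre><code class="language-Ocaml">type flat  = Pair of int * int     (* one block:  (tag Pair, int, int) *)
type boxed = Tuple of (int * int)  (* two blocks: (tag Tuple, ptr) -> (tag 0, int, int) *)
</code></pre>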
<h4>Existing approach</h4>
<p>A partial solution to the problem already existed in a Pull Request (PR) available <a href="https://github.com/ocaml/ocaml/pull/8958">here</a>.</p>
<p>The approach adopted there is natural: switches are used to gain information on the tag of a block, depending on the branch taken. The PR also makes it possible to know the mutability and the size of the block in each branch, starting from OCaml (where the information is known, since the constructor is explicit in the match) and propagating it down to Flambda.</p>
<p>This makes it possible to record in the environment all the blocks on which a switch has been performed, together with their tag, size and mutability. We can then detect whether one of them is being rebuilt with the <code>Pmakeblock</code> primitive.</p>
<p>This approach is unfortunately limited, since there are many cases where we could know the tag and size of a block without switching on it. For example, a tuple reconstruction can never be simplified with this solution.</p>
<h4>New approach</h4>
<p>Our new approach therefore starts by propagating more information from OCaml. The propagation builds on two PRs that existed for Flambda 2, which annotate each projection (<code>Pfield</code>) in Lambda with information derived from OCaml typing. One adds the <a href="https://github.com/ocaml-flambda/ocaml/commit/fa5de9e64ff1ef04b596270a8107d1f9dac9fb2d">mutability of the block</a>, the other <a href="https://github.com/ocaml-flambda/ocaml/pull/53">its tag and its size</a>.</p>
<p>Our first contribution was to adapt these PRs to Flambda 1 and to propagate the information correctly from Lambda to Flambda.</p>
<p>We then have the information needed to detect block reconstructions: in addition to the list of blocks that have been switched on, we build a list of partially immutable blocks, that is, blocks for which we know that some fields are immutable.</p>
<p>It is used as follows:</p>
<h6>Block discovery</h6>
<p>Whenever we see a projection, we check whether it is performed on an immutable block of known size. If so, we add the corresponding block to the partial blocks. We check that the information we have on the tag and the size is compatible with that of the previously seen projections of this block. If we now know all the fields of the block, we add it to our list of known blocks on which simplifications can be performed.</p>
<p>We also keep the information on the blocks we know about thanks to switches.</p>
<h6>Simplification</h6>
<p>This part is similar to the one in the original PR: when an immutable block is built, we check whether we already know it, and if so we do not reallocate it.</p>
<p>Compared to the original approach, we also reduced the complexity of the original PR (from quadratic to linear) by recording, for each projection variable, its index and its original block. We also changed some details of the original implementation that could have caused a bug when combined with our PR.</p>
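<p>A minimal sketch of this bookkeeping (hypothetical names, not the actual Flambda code): each projection variable is recorded once with its source block and field index, so checking whether a <code>Pmakeblock</code> rebuilds a known block only needs one lookup per field instead of a scan over all recorded projections:</p>
<pre><code class="language-Ocaml">module VarMap = Map.Make (String)

(* Where a projection variable comes from. *)
type origin = { source_block : string; field_index : int }

let record_projection env ~var ~block ~index =
  VarMap.add var { source_block = block; field_index = index } env
</code></pre>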
<h4>Example</h4>
<p>Consider this function:</p>
<pre><code class="language-Ocaml">type typ1 = A of int | B of int * int
type typ2 = C of int | D of {x:int; y:int}
let id = function
| A n -> C n
| B (x,y) -> D {x; y}
</code></pre>
<p>The current compiler would produce the following Flambda:</p>
<pre><code>End of middle end:
let_symbol
(camlTest__id_21
(Set_of_closures (
(set_of_closures id=Test.8
(id/5 = fun param/7 ->
(switch*(0,2) param/7
case tag 0:
(let
(Pmakeblock_arg/11 (field 0<{../../test.ml:4,4-7}> param/7)
Pmakeblock/12
(makeblock 0 (int)<{../../test.ml:4,11-14}>
Pmakeblock_arg/11))
Pmakeblock/12)
case tag 1:
(let
(Pmakeblock_arg/15 (field 1<{../../test.ml:5,4-11}> param/7)
Pmakeblock_arg/16 (field 0<{../../test.ml:5,4-11}> param/7)
Pmakeblock/17
(makeblock 1 (int,int)<{../../test.ml:5,17-23}>
Pmakeblock_arg/16 Pmakeblock_arg/15))
Pmakeblock/17)))
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
(camlTest__id_5_closure (Project_closure (camlTest__id_21, id/5)))
(camlTest (Block (tag 0, camlTest__id_5_closure)))
End camlTest
</code></pre>
<p>Our improvement makes it possible to detect that this function rebuilds similar blocks, and therefore simplifies it:</p>
<pre><code>End of middle end:
let_symbol
(camlTest__id_21
(Set_of_closures (
(set_of_closures id=Test.7
(id/5 = fun param/7 ->
(switch*(0,2) param/7
case tag 0 (1): param/7
case tag 1 (2): param/7))
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
(camlTest__id_5_closure (Project_closure (camlTest__id_21, id/5)))
(camlTest (Block (tag 0, camlTest__id_5_closure)))
End camlTest
</code></pre>
<h4>Possible improvements</h4>
<h5>Relaxing the equality</h5>
<p>We can use the observational equality studied in the theoretical part for block equality, in order to avoid even more allocations. The implementation is simple:</p>
<p>When a block is created, to decide whether it must be allocated, the normal approach is to check whether each of its fields is a known projection of another block, with the same index, and whether the two blocks have the same size. We can simply remove the last check.</p>
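<p>Here is a minimal sketch of that check (our own hypothetical code, reusing the <code>origin</code> record from the sketch above); the <code>relaxed</code> flag is the only difference between the strict and the observational variant:</p>
<pre><code class="language-Ocaml">(* [fields] lists the origins of the fields of the freshly built block, in
   order. The block can be replaced by an existing block [b] when field [i]
   is the [i]-th projection of [b]; the strict version additionally requires
   the new block and [b] to have the same size. *)
let same_block ~relaxed ~size_of fields =
  match fields with
  | [] -> None
  | { source_block = b; _ } :: _ ->
    let ok =
      List.for_all2
        (fun i { source_block; field_index } ->
           source_block = b && field_index = i)
        (List.init (List.length fields) Fun.id)
        fields
    in
    if ok && (relaxed || List.length fields = size_of b) then Some b else None
</code></pre>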
<p>The implementation turned out to be a bit harder than expected because of practical details. First of all, we want to apply this optimization only to certain blocks annotated by the user. The annotation therefore has to be propagated down to Flambda.</p>
<p>Moreover, if we simply implement the optimization, many cases are missed, because unused variables are simplified away before our pass. For example, take a function of the following form:</p>
<pre><code class="language-Ocaml">let loose_id (a,b,c) = (a,b)
</code></pre>
<p>The variable <code>c</code> will be simplified away before reaching Flambda, so we can no longer prove that <code>(a,b,c)</code> is immutable, since its third field might not be. This problem is about to be solved in Flambda 2 thanks to a PR that propagates mutability information for all blocks, but we did not have the time needed to adapt it to Flambda 1.</p>
<h3>Detecting recursive identities</h3>
<p>Now that we can detect block reconstructions, the problem of recursive functions remains to be solved.</p>
<h4>Approach without guarantees</h4>
<p>We started by implementing an approach that does not include a termination proof. The idea is to add the proof later, or to allow functions that do not always terminate to be simplified, provided they are correct with respect to typing (see section 7 in the theoretical part).</p>
<p>Here, we trust the user to check these properties manually.</p>
<p>We therefore modified the simplification of functions: when simplifying a function with a single argument, we start by assuming that this function is the identity before simplifying its body. We then check whether the result is equivalent to an identity by traversing it recursively, so as to cover as many cases as possible (for example conditional branches). If it is, the function is replaced by the identity; otherwise, we fall back to a classic simplification, without the induction hypothesis.</p>
<h4>Constant propagation</h4>
<p>We then improved the function that determines whether the body of a function is an identity or not, so that it handles constants. It propagates the equality information gained about the argument in conditional branches.</p>
<p>Thus, if we have a function of the form</p>
<pre><code class="language-Ocaml">type truc = A | B | C
let id = function
| A -> A
| B -> B
| C -> C
</code></pre>
<p>or even</p>
<pre><code class="language-Ocaml">let id x = if x=0 then 0 else x
</code></pre>
<p>we will indeed detect that it is the identity.</p>
<h4>Examples</h4>
<h5>Recursive functions</h5>
<p>We can now detect recursive identities:</p>
<pre><code class="language-Ocaml">let rec listid = function
| t::q -> t::(listid q)
| [] -> []
</code></pre>
<p>This previously compiled as follows:</p>
<pre><code>End of middle end:
let_rec_symbol
(camlTest__listid_5_closure
(Project_closure (camlTest__set_of_closures_20, listid/5)))
(camlTest__set_of_closures_20
(Set_of_closures (
(set_of_closures id=Test.11
(listid/5 = fun param/7 ->
(if param/7 then begin
(let
(apply_arg/13 (field 1<{../../test.ml:9,4-8}> param/7)
apply_funct/14 camlTest__listid_5_closure
Pmakeblock_arg/15
              *(apply*[listid/5]<{../../test.ml:9,15-25}> apply_funct/14
apply_arg/13)
Pmakeblock_arg/16 (field 0<{../../test.ml:9,4-8}> param/7)
Pmakeblock/17
(makeblock 0<{../../test.ml:9,12-25}> Pmakeblock_arg/16
Pmakeblock_arg/15))
Pmakeblock/17)
end else begin
(let (const_ptr_zero/27 Const(0a)) const_ptr_zero/27) end))
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
let_symbol (camlTest (Block (tag 0, camlTest__listid_5_closure)))
End camlTest
</code></pre>
<p>We now detect that it is the identity:</p>
<pre><code>End of middle end:
let_symbol
(camlTest__set_of_closures_20
(Set_of_closures (
(set_of_closures id=Test.13 (listid/5 = fun param/7 -> param/7)
free_vars={ } specialised_args={}) direct_call_surrogates={ }
set_of_closures_origin=Test.1])))
(camlTest__listid_5_closure
(Project_closure (camlTest__set_of_closures_20, listid/5)))
(camlTest (Block (tag 0, camlTest__listid_5_closure)))
End camlTest
</code></pre>
<h5>Unsafe example</h5>
<p>On the other hand, one can take advantage of the absence of guarantees to circumvent the type system and access a memory address as if it were an integer:</p>
<pre><code class="language-Ocaml">type bugg = A of int*int | B of int
let rec bug = function
| A (a,b) -> (a,b)
| B x -> bug (B x)
let (a,b) = (bug (B 42))
let _ = print_int b
</code></pre>
<p>This function will be simplified to the identity even though the type <code>bugg</code> is not compatible with the tuple type; when we try to project onto the second field of the variant <code>b</code>, we access an undefined part of memory:</p>
<pre><code>$ ./unsafe.out
47423997875612
</code></pre>
<h4>Possible improvements – short term</h4>
<h5>Function annotations</h5>
<p>An improvement that is simple in theory would be to let the user choose the functions to which they want to apply these optimizations, which are not always correct. We did not have the time to do the work of propagating this information down to Flambda, but there should be no implementation difficulty.</p>
<h5>Order on the arguments</h5>
<p>To get a safer optimization, we would like to use the idea developed in the theoretical part, which makes the optimization correct on non-cyclic objects, and above all restores the typing guarantees, avoiding the problem seen in the example above.</p>
<p>To obtain this guarantee, we want to change the simplification pass so that its environment contains an optional pair of a function and an argument. When this option is present, the pair indicates that we are inside the body of a function, in the process of simplifying it, and therefore that applications of this function to elements smaller than the argument can be simplified into an identity. Of course, we would also have to modify the pass so that it remembers which elements are not smaller than the argument.</p>
<h4>Possible improvements – long term</h4>
<h5>Excluding cyclic objects</h5>
<p>As described in the theoretical part, we could recursively infer which objects are cyclic and try to exclude them from our optimization. The problem is then that instead of replacing the functions by the identity, we need a special annotation representing <code>IdRec</code>.</p>
<p>This becomes much more complex to implement when compiling across several files, since we then need this information in the interfaces of already compiled files in order to perform the optimization when necessary.</p>
<p>One possibility would be to use the .cmx files to record this information when compiling a file, but this kind of implementation was too long to carry out during the internship. Moreover, it is not even clear that it would be a good practical choice: it would make the optimization much more complex, for a small benefit compared to a version that is correct on non-cyclic objects and enabled by a user annotation.</p>
opam 2.1.0~rc2 releasedhttps://ocamlpro.com/blog/2021_06_23_opam_2.1.0_rc2_released2021-06-23T08:12:13Z2021-06-23T08:12:13Z
David Allsopp (OCamlLabs)
Feedback on this post is welcomed on Discuss! The opam team has great pleasure in announcing opam 2.1.0~rc2! The focus since beta4 has been preparing for a world with more than one released version of opam (i.e. 2.0.x and 2.1.x). The release candidate extends CLI versioning further and, under the ho...<p><em>Feedback on this post is welcomed on <a href="https://discuss.ocaml.org/t/ann-opam-2-1-0-rc2/8042">Discuss</a>!</em></p>
<p>The opam team has great pleasure in announcing opam 2.1.0~rc2!</p>
<p>The focus since beta4 has been preparing for a world with more than one released version of opam (i.e. 2.0.x and 2.1.x). The release candidate extends CLI versioning further and, under the hood, includes a big change to the opam root format which allows new versions of opam to indicate that the root may still be read by older versions of the opam libraries. A plugin compiled against the 2.0.9 opam libraries will therefore be able to read information about an opam 2.1 root (plugins and tools compiled against 2.0.8 are unable to load opam 2.1.0 roots).</p>
<p>Please do take this release candidate for a spin! It is available in the Docker images at ocaml/opam on <a href="https://hub.docker.com/r/ocaml/opam/tags">Docker Hub</a> as the opam-2.1 command (or you can <code>sudo ln -f /usr/bin/opam-2.1 /usr/bin/opam</code> in your <code>Dockerfile</code> to switch to it permanently). The release candidate can also be tested via our installation script (see the <a href="https://github.com/ocaml/opam/wiki/How-to-test-an-opam-feature#from-a-tagged-release-including-pre-releases">wiki</a> for more information).</p>
<p>Thank you to anyone who noticed the unannounced first release candidate and tried it out. Between tagging and what would have been announcing it, we discovered an issue with upgrading local switches from earlier alpha/beta releases, and so fixed that for this second release candidate.</p>
<p>Assuming no showstoppers, we plan to release opam 2.1.0 next week. The improvements made in 2.1.0 will allow for a much faster release cycle, and we look forward to posting about the 2.2.0 plans soon!</p>
<h2>Try it!</h2>
<p>In case you plan a possible rollback, you may want to first backup your
<code>~/.opam</code> directory.</p>
<p>The upgrade instructions are unchanged:</p>
<ol>
<li>Either from binaries: run
</li>
</ol>
<pre><code class="language-shell-session">bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.1.0~rc2"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.1.0-rc2">the Github "Releases" page</a> to your PATH.</p>
<ol start="2">
<li>Or from source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.1.0-rc2#compiling-this-repo">README</a>.
</li>
</ol>
<p>You should then run:</p>
<pre><code class="language-shell-session">opam init --reinit -ni
</code></pre>
<p>We hope there won't be any, but please report any issues to <a href="https://github.com/ocaml/opam/issues">the bug-tracker</a>.
Thanks for trying it out, and we hope you enjoy it!</p>
Tutorial: Format Module of OCamlhttps://ocamlpro.com/blog/2021_05_06_tutorial_format_module_of_ocaml2021-05-06T08:12:13Z2021-05-06T08:12:13Z
OCamlPro
...<p>The <a href="http://caml.inria.fr/pub/docs/manual-ocaml/libref/Format.html">Format</a> module of OCaml is an extremely powerful but unfortunately often poorly used module. </p>
<p>It combines two distinct elements:</p>
<ul><li>pretty-print boxes</li><li>semantic tags</li></ul>
<p>This tutorial aims to demystify much of this module and explain the range of things that you can do with it.</p>
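<p>As a tiny illustration of the first element (this snippet is ours, not taken from the tutorial), pretty-print boxes let <code>Format</code> decide where to break lines:</p>
<pre><code class="language-Ocaml">(* "@[<hov 2>" opens a box indented by 2 spaces, "@ " is a breakable space,
   and "@]" closes the box; Format inserts line breaks only where needed to
   stay within the margin. *)
let () =
  Format.printf "@[<hov 2>let x =@ %d in@ x + %d@]@." 1 2
</code></pre>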
<p><a href="/blog/2020_06_01_fr_tutoriel_format">Read more (in French)</a></p>
Annual Meeting of the Alt-Ergo Users' Club 2021https://ocamlpro.com/blog/2021_04_29_reunion_annuelle_du_club_des_utilisateurs_dalt_ergo_20212021-04-29T08:12:13Z2021-04-29T08:12:13Z
OCamlPro
The third annual meeting of the Alt-Ergo Users' Club took place on April 1st! This annual meeting is the ideal place to review each partner's needs regarding Alt-Ergo. We had the pleasure of welcoming our partners to discuss the roa...<p>The third annual meeting of the Alt-Ergo Users' Club took place on April 1st! This annual meeting is the ideal place to review each partner's needs regarding Alt-Ergo. We had the pleasure of welcoming our partners to discuss the roadmap for future Alt-Ergo developments and improvements.</p>
<blockquote>
<p>Alt-Ergo is an automated prover of mathematical formulas, created at <a href="https://www.lri.fr/">LRI</a> and developed by OCamlPro since 2013. To learn more or join the Club, visit <a href="https://alt-ergo.ocamlpro.com">https://alt-ergo.ocamlpro.com</a>.</p>
</blockquote>
<p>Our Club has several objectives. Its main goal is to ensure the sustainability of Alt-Ergo by fostering collaboration between Club members and building ties with users of formal methods, such as the Why3 community. One of our priorities is to identify the needs of constraint-solver users by extending Alt-Ergo to new domains such as Model Checking, while competing with the other state-of-the-art solvers in international competitions. Finally, the Club's last objective is to find new projects or contracts for the development of long-term features.</p>
<p>We would like to thank all our members for their support: Mitsubishi Electric R&D Centre Europe, AdaCore and CEA List. We also want to highlight the <a href="http://why3.lri.fr/">Why3</a> development team, with whom we work to improve our tools.</p>
<p>This year, new points of interest were raised by our members. First, model generation, added to Alt-Ergo after the last edition, has proved useful to most Club members. The technical points now requested are the ability to refine constraints and to study how to propagate them. Then came the presentation of Dolmen, the parser/typechecker that will make it possible to type SMT2 files only once and to be ready for SMT3. Its integration into Alt-Ergo is in progress, and the Club members are enthusiastic about the future contributions of the Dolmen tool to the SMT-solver community!</p>
<p>These features are now our main priorities; you can find <a href="https://gitlab.ocamlpro.com/OCamlPro/club-alt-ergo_ext/-/blob/master/Planche_Club_Alt-Ergo_Edition2021.pdf?inline=false">the slides</a> presented at the 2021 edition of the Club meeting. To follow our progress and news, feel free to read the <a href="/blog/category/formal_methods">articles</a> on our blog.</p>
New Try-Alt-Ergohttps://ocamlpro.com/blog/2021_03_29_new_try_alt_ergo2021-03-29T08:12:13Z2021-03-29T08:12:13Z
Albin Coquereau
Have you heard about our Try-Alt-Ergo website? Created in 2014 (see our blogpost), the first objective was to facilitate access to our performant SMT Solver Alt-Ergo. Try-Alt-Ergo allows you to write and run your problems in your browser without any server computation. This playground website has be...<p><img src="/blog/assets/img/screenshot_ask_altergo.jpg" alt="" /></p>
<p>Have you heard about our <a href="https://alt-ergo.ocamlpro.com/try.html">Try-Alt-Ergo</a> website? Created in 2014 (see <a href="/blog/2014_07_15_try_alt_ergo_in_your_browser">our blogpost</a>), the first objective was to facilitate access to our performant SMT Solver <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a>. <em>Try-Alt-Ergo allows you to write and run your problems in your browser without any server computation.</em></p>
<p>This playground website has been maintained by OCamlPro for many years, and it's high time to bring it back to life with new updates. We are therefore pleased to announce the new version of the <a href="https://try-alt-ergo.ocamlpro.com/">Try-Alt-Ergo</a> website! In this article, we will first explain what has changed in the back end, and what you can use if you are interested in running your own version of Alt-Ergo on a website, or in an application! And then we will focus on the new front-end of our website, from its interface to its features, through its tutorial about the program.</p>
<h2><a href="/blog/2021_03_29_new_try_alt_ergo">Try-Alt-Ergo 2014</a></h2>
<p><img src="/blog/assets/img/screenshot_from_2021_03_29.png" alt="" /></p>
<p><a href="https://alt-ergo.ocamlpro.com/try.html">Try-Alt-Ergo</a> was designed to be a powerful and simple tool to use. Its interface was minimalist. It offered three panels: one panel (left) with a text area containing the problem to prove; a central panel composed of buttons to run Alt-Ergo, load examples and set options; and a right panel showing these options, examples and other information. This design lacked some features that have been added to our solver through the years, such as models (counter-examples), unsat cores, more options and debug information.</p>
<p>Try-Alt-Ergo did not offer a proper editor (with syntax coloration), a way to save the problem file, nor an option to limit the solver's run with a time limit. Another issue was threading: when the solver was called, the webpage froze, which was problematic for long runs since there was no way to stop the solver.</p>
<h2><a href="/blog/2021_03_29_new_try_alt_ergo">Alt-Ergo 1.30</a></h2>
<p>The 1.30 version of Alt-Ergo was the version used in the back-end to prove problems. Since this version, a lot of improvements have been done in Alt-Ergo. To learn more about these improvements, see our <a href="https://ocamlpro.github.io/alt-ergo/About/changes.html">changelog</a> in the documentation.</p>
<p>Over the years we encountered some difficulties updating the Alt-Ergo version used in Try-Alt-Ergo. We used <a href="https://ocsigen.org/js_of_ocaml/latest/manual/overview">Js_of_ocaml</a> to compile the OCaml code of our solver so it could run as JavaScript code. Some libraries were not available in JavaScript and we needed to disable them manually. This lack of automation meant we rarely found the time to update the JavaScript version of Alt-Ergo in Try-Alt-Ergo.</p>
<p>In 2019 we switched our build system to <a href="https://dune.readthedocs.io/en/latest/overview.html">dune</a> which opens the possibility to ease the cross-compilation of Alt-Ergo in JavaScript.</p>
<h2><a href="/blog/2021_03_29_new_try_alt_ergo">New back-end</a></h2>
<p>With some simple modifications, we were able to compile Alt-Ergo to JavaScript. These modifications are simple enough that the process is now automated in our continuous integration, which will enable us to easily provide a JavaScript version of our solver for each future release.</p>
<p>Two ways of using our solver in JavaScript are available:</p>
<ul>
<li><code>alt-ergo.js</code>, a JavaScript version of the Alt-Ergo CLI. It can be run with <code>node</code>: <code>node alt-ergo.js <options> <file></code>. Note that this code is slower than the natively compiled CLI of Alt-Ergo. In our effort to open the SMT world to more people, an npm package is the next step of this work.
</li>
<li><code>alt-ergo-worker.js</code>, a web worker for Alt-Ergo. This web worker takes JSON input to pass the problem file and options to Alt-Ergo, and returns its answers in JSON as well:
<ul>
<li>Options are sent as a list of <em>name: value</em> pairs, like <code>{"debug":true,"input_format":"Native","steps_bound":100,"sat_solver": "Tableaux","file":"test-file"}</code>. You can specify all the options used in Alt-Ergo; if some options are missing, the worker uses their default values. For example, if <code>debug</code> is not specified, the worker will use its default value <em>false</em>.
</li>
<li>The input file is sent as a list of strings, with the following format: <code>{ "content": [ "goal g: true"] }</code>
</li>
<li>Alt-Ergo's answers can be composed of its results, debug information, errors, warnings, etc.: <code>{ "results": [ "File "test-file", line 1, characters 9-13: Valid (0.2070) (0 steps) (goal g)" ], "debugs": [ "[Debug][Sat_solver]", "use Tableaux-like solver"] }</code>. As with the options, if a result value such as <code>debugs</code> does not contain anything, <code>"debugs": [...]</code> is not returned.
</li>
<li>See the Alt-Ergo <a href="https://ocamlpro.github.io/alt-ergo/Usage/index.html#js-worker">web-worker documentation</a> to learn more on how to use it.
</li>
</ul>
</li>
</ul>
<h2><a href="/blog/2021_03_29_new_try_alt_ergo">New Front-end</a></h2>
<p><img src="/blog/assets/img/screenshot_new_altergo_interface.jpg" alt="" /></p>
<p>The <a href="https://try-alt-ergo.ocamlpro.com">Try-Alt-Ergo</a> website has been completely reworked and we added some features:</p>
<ul>
<li>The left panel is still composed of an editor and an answers area
<ul>
<li><a href="https://ace.c9.io/">Ace editor</a> with custom syntax coloration (both native and smt-lib2) is now used to make it more pleasant to write your problems.
</li>
</ul>
</li>
<li>A top panel that contains the following buttons:
<ul>
<li><code>Ask Alt-Ergo</code>, which retrieves the content from the editor and the options, launches the web worker and prints the answers in the defined areas.
</li>
<li><code>Load</code> and <code>Save</code> files.
</li>
<li><code>Documentation</code>, that sends users to the newly added <a href="https://ocamlpro.github.io/alt-ergo/Input_file_formats/Native/index.html">native syntax documentation</a> of Alt-Ergo.
</li>
<li><code>Tutorial</code>, that opens an interactive <a href="https://try-alt-ergo.ocamlpro.com/tuto/tutorial.html">tutorial</a> to introduce you to Alt-Ergo native syntax and program verification.
</li>
</ul>
</li>
</ul>
<p><img src="/blog/assets/img/screenshot_welcome_to_altergo_tutorial.png" alt="" /></p>
<ul>
<li>A right panel composed of tabs:
<ul>
<li><code>Start</code> and <code>About</code>, which contain general information about Alt-Ergo, Try-Alt-Ergo and how to use it.
</li>
<li><code>Outputs</code>, which prints more information than the basic answer area under the editor. In this tab you can find debug (long) outputs, the unsat core or models (counter-examples) generated by Alt-Ergo.
</li>
<li><code>Options</code>, which contains every option you can use, such as the time limit, the steps limit, or the format of the input file to prove.
</li>
<li><code>Statistics</code>, which is still a basic tab that only outputs the axioms used to prove the input problem.
</li>
<li><code>Examples</code> contains some basic examples showing the capabilities of our solver.
</li>
</ul>
</li>
</ul>
<p>We hope you will enjoy this new version of Try-Alt-Ergo, we can't wait to read your feedback!</p>
<p><em>This work was done at OCamlPro.</em></p>
opam 2.0.8 releasehttps://ocamlpro.com/blog/2021_02_08_opam_2.0.8_release2021-02-08T08:12:13Z2021-02-08T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the minor release of opam 2.0.8. This new version contains some backported fixes: Critical for fish users! Don't add . to PATH. [#4078]
Fix sandbox script for newer ccache versions. [#4079 and #4087]
Fix sandbox crash when ~/.cache is a symlink. [#4068]
User modifications ...<p>We are pleased to announce the minor release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.8">opam 2.0.8</a>.</p>
<p>This new version contains some <a href="https://github.com/ocaml/opam/pull/4425">backported</a> fixes:</p>
<ul>
<li><strong>Critical for fish users!</strong> Don't add <code>.</code> to <code>PATH</code>. [<a href="https://github.com/ocaml/opam/issues/4078">#4078</a>]
</li>
<li>Fix sandbox script for newer <code>ccache</code> versions. [<a href="https://github.com/ocaml/opam/issues/4079">#4079</a> and <a href="https://github.com/ocaml/opam/pull/4087">#4087</a>]
</li>
<li>Fix sandbox crash when <code>~/.cache</code> is a symlink. [<a href="https://github.com/ocaml/opam/issues/4068">#4068</a>]
</li>
<li>User modifications to the sandbox script are no longer overwritten by <code>opam init</code>. [<a href="https://github.com/ocaml/opam/pull/4092">#4020</a> & <a href="https://github.com/ocaml/opam/pull/4092">#4092</a>]
</li>
<li>macOS sandbox script always mounts <code>/tmp</code> read-write, regardless of <code>TMPDIR</code> [<a href="https://github.com/ocaml/opam/pull/3742">#3742</a>, addressing <a href="https://github.com/ocaml/opam-repository/issues/13339">ocaml/opam-repository#13339</a>]
</li>
<li><code>pre-</code> and <code>post-session</code> hooks can now print to the console [<a href="https://github.com/ocaml/opam/issues/4359">#4359</a>]
</li>
<li>Switch-specific pre/post sessions hooks are now actually run [<a href="https://github.com/ocaml/opam/issues/4472">#4472</a>]
</li>
<li>Standalone <code>opam-installer</code> now correctly builds from sources [<a href="https://github.com/ocaml/opam/issues/4173">#4173</a>]
</li>
<li>Fix <code>arch</code> variable detection when using 32bit mode on ARM64 and i486 [<a href="https://github.com/ocaml/opam/pull/4462">#4462</a>]
</li>
</ul>
<p>A more complete <a href="https://github.com/ocaml/opam/releases/tag/2.0.8">release note</a> is available.</p>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">$~ bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.0.8"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.8">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">$~ opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.8#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>, and published in <a href="https://discuss.ocaml.org/t/ann-opam-2-0-8-release/7242">discuss.ocaml.org</a>.</p>
</blockquote>
2020 at OCamlProhttps://ocamlpro.com/blog/2021_02_02_2020_at_ocamlpro2021-02-02T08:12:13Z2021-02-02T08:12:13Z
Muriel
OCamlPro
2020 at OCamlPro OCamlPro was created in 2011 to advocate the adoption of the OCaml language and formal methods in general in the industry. While building a team of highly-skilled engineers, we navigated through our expertise domains, delivering works on the OCaml language and tooling, training comp...<p><img src="/blog/assets/img/logo_2020_at_ocamlpro.png" alt="2020 at OCamlPro" /></p>
<p>OCamlPro was created in 2011 to advocate the adoption of the OCaml language and formal methods in general in the industry. While building a team of highly-skilled engineers, we navigated through our expertise domains, delivering works on the OCaml language and tooling, training companies to the use of strongly-typed languages like OCaml but also Rust, tackling formal verification challenges with formal methods, maintaining <a href="https://alt-ergo.ocamlpro.com">the SMT solver Alt-Ergo</a>, designing languages and tools for smart contracts and blockchains, and more!</p>
<p>In this article, as every year (see <a href="/blog/2020_02_04_2019_at_ocamlpro">2019 at OCamlPro</a> for last year's post), we review some of the work we did during 2020, in many different worlds.</p>
<p><div id="tableofcontents">
<strong>Table of contents</strong></p>
<p> <a href="#ocaml">In the World of OCaml</a></p>
<ul>
<li><a href="#flambda">Flambda & Compilation Team</a>
</li>
<li><a href="#opam">Opam, the OCaml Package Manager</a>
</li>
<li><a href="#community">Encouraging OCaml Adoption: Trainings and Resources for OCaml</a>
</li>
<li><a href="#tooling">Open Source Tooling and Libraries for OCaml</a>
</li>
<li><a href="#foundation">Supporting the OCaml Software Foundation</a>
</li>
<li><a href="#events">Events</a>
</li>
</ul>
<p><a href="#formal-methods">In the World of Formal Methods</a></p>
<ul>
<li><a href="#alt-ergo">Alt-Ergo Development</a>
</li>
<li><a href="#club">Alt-Ergo Users’ Club and R&D Projects</a>
</li>
<li><a href="#roadmap">Alt-Ergo’s Roadmap</a>
</li>
</ul>
<p><a href="#rust">In the World of Rust</a></p>
<p><a href="#blockchains">In the World of Blockchain Languages</a>
</div></p>
<p>We warmly thank all our partners, clients and friends for their support and collaboration during this peculiar year!</p>
<p>The first lockdown was a surprise and we took advantage of this special moment to go over our past contributions and sum them up in a timeline that gives an overview of the key events that made OCamlPro over the years. The <a href="https://timeline.ocamlpro.com">timeline format</a> is a great way to reconnect with our history and to take stock of our accomplishments.</p>
<p>This will now turn into a generic timeline editing tool on the Web; stay tuned if you are interested in our internal project becoming available to the general public! If you think that a timeline would fit your needs and audience, <a href="https://timelines.cc/">we designed a simplistic tool</a>, tailored for users who want complete control over their data.</p>
<h2>
<a id="ocaml" class="anchor"></a><a class="anchor-link" href="#ocaml">In the World of OCaml</a>
</h2>
<h3>
<a id="flambda" class="anchor"></a><a class="anchor-link" href="#flambda">Flambda & Compilation Team</a>
</h3>
<p><em>Work by Pierre Chambart, Vincent Laviron, Guillaume Bury, Pierrick Couderc and Louis Gesbert</em></p>
<p><img src="/blog/assets/img/picture_cpu.jpg" alt="flambda" /></p>
<p>OCamlPro is proud to be working on Flambda2, an ambitious work on an OCaml optimizing compiler, in close collaboration with Mark Shinwell from our long-term partner and client Jane Street. Flambda focuses on reducing the runtime cost of abstractions and removing as many short-lived allocations as possible. In 2020, the Flambda team worked on a considerable number of fixes and improvements, transforming Flambda2 from an experimental prototype to a version ready for testing in production!</p>
<p>This year also marked the conclusion of our work on the pack rehabilitation (see our two recent posts <a href="/blog/2020_09_24_rehabilitating_packs_using_functors_and_recursivity_part_1">Part 1</a> and <a href="/blog/2020_09_30_rehabilitating_packs_using_functors_and_recursivity_part_2">Part 2</a>, and a much simpler <a href="/blog/2011_08_10_packing_and_functors">Version</a> in 2011). Our work aimed to give them a new youth and utility by adding the possibility to generate functors or recursive packs. This improvement allows programmers to define big functors, functors that are split among multiple files, resulting in what we can view as a way to implement some form of parameterized libraries.</p>
<p><em>This work is allowed thanks to Jane Street’s funding.</em></p>
<h3>
<a id="opam" class="anchor"></a><a class="anchor-link" href="#opam">Opam, the OCaml Package Manager</a>
</h3>
<p><em>Work by Raja Boujbel, Louis Gesbert and Thomas Blanc</em></p>
<p><img src="/blog/assets/img/picture_containers.jpg" alt="opam" /></p>
<p><a href="https://opam.ocaml.org/">Opam</a> is the OCaml source-based package manager. The first specification draft was written <a href="https://opam.ocaml.org/about.html">in early 2012</a> and went on to become OCaml’s official package manager — though it may be used for other languages and projects, since Opam is language-agnostic! If you need to install, upgrade and manage your compiler(s), tools and libraries easily, Opam is meant for you. It supports multiple simultaneous compiler installations, flexible package constraints, and a Git-friendly development workflow.</p>
<p><a href="https://github.com/ocaml/opam/releases">Our 2020 work on Opam</a> led to the release of two versions of opam 2.0 with small fixes, and the release of three alphas and two betas of Opam 2.1!</p>
<p>Opam 2.1.0 will soon go to release candidate and will introduce a seamless integration of depexts (system dependencies handling), dependency locking, pinning sub-directories, invariant-based definition for Opam switches, the configuration of Opam from the command-line without the need for a manual edition of the configuration files, and the CLI versioning for better handling of CLI evolutions.</p>
<p><em>This work is greatly helped by Jane Street’s funding and support.</em></p>
<h3>
<a id="community" class="anchor"></a><a class="anchor-link" href="#community">Encouraging OCaml Adoption: Trainings and Resources for OCaml</a>
</h3>
<p><em>Work by Pierre Chambart, Vincent Laviron, Adrien Champion, Mattias, Louis Gesbert and Thomas Blanc</em></p>
<p><img src="/blog/assets/img/picture_ocaml_library.jpg" alt="trainings" /></p>
<p>OCamlPro is also a training centre. We organise yearly training sessions for programmers from multiple companies in our offices: from OCaml to OCaml tooling to Rust! We can also design custom and on-site trainings to meet specific needs.</p>
<p>We released a brand new version of TryOCaml, a tool born from our work on <a href="https://ocaml-sf.org/learn-ocaml/">Learn-OCaml</a>!
<a href="https://try.ocamlpro.com">Try OCaml</a> has been highly praised by professors at the beginning of the Covid lockdown. Even if it can be used as a personal sandbox, it’s also possible to adapt its usage for classes. TryOCaml is a hassle-free tool that lowers significantly the barriers to start coding in OCaml, as no installation is required.</p>
<p>We regularly release cheat sheets for developers: in 2020, we shared <a href="/blog/2020_01_10_opam_2.0_cheat_sheet">the long-awaited Opam 2.0 cheat sheet</a>, with a new theme! In just two pages, you’ll have in one place the everyday commands you need as an Opam user. We also shine some light on unsung features which may just change your coding life.</p>
<p>2020 was also an important year for the OCaml language itself: we were pleased to welcome <a href="https://ocaml.org/releases/4.10.0.html">OCaml 4.10</a>! One of the highlights of the release was the “Best-fit” Garbage Collector Strategy. We had <a href="/blog/2020_03_23_in_depth_look_at_best_fit_gc">an in-depth look</a> at this exciting change.</p>
<p><em>This work is self-funded by OCamlPro as part of its effort to ease the adoption of OCaml.</em></p>
<h3>
<a id="tooling" class="anchor"></a><a class="anchor-link" href="#tooling">Open Source Tooling and Libraries for OCaml</a>
</h3>
<p><em>Work by Fabrice Le Fessant, Léo Andrès and David Declerck</em></p>
<p><img src="/blog/assets/img/picture_tools.jpg" alt="tooling" /></p>
<p>OCamlPro has a long history of developing open source tooling and libraries for the community. 2020 was no exception!</p>
<p><a href="https://github.com/OCamlPro/drom">drom</a> is a simple tool to create new OCaml projects that will use best OCaml practices, i.e. Opam, Dune and tests. Its goal is to provide a cargo-like user experience and helps onboarding new developers in the community. drom is available in the official opam repository.</p>
<p><a href="https://github.com/OCamlPro/directories">directories</a> is a new OCaml Library that provides configuration, cache and data paths (and more!). The library follows the suitable conventions on Linux, MacOS and Windows.</p>
<p><a href="https://ocamlpro.github.io/opam-bin/">opam-bin</a> is a framework to create and use binary packages with Opam. It enables you to create, use and share binary packages easily with opam, and to create as many local switches as you want spending no time, no disk space! If you often use Opam, opam-bin is a must-have!</p>
<p>We also released a number of libraries, focused on making things easy for developers… so we named them with an <code>ez_</code> prefix: <a href="https://github.com/OCamlPro/ez_cmdliner">ez_cmdliner</a> provides an Arg-like interface for cmdliner, <a href="https://github.com/OCamlPro/ez_file">ez_file</a> provides simple functions to read and write files, <a href="https://github.com/OCamlPro/ez_subst">ez_subst</a> provides easily configurable string substitutions for shell-like variable syntax, <a href="https://github.com/OCamlPro/ez_config">ez_config</a> provides abstract options stored in configuration files with an OCaml syntax. There are also a lot of <a href="https://github.com/OCamlPro?q=ezjs">ezjs-*</a> libraries, that are bindings to Javascript libraries that we used in some of our js_of_ocaml projects.</p>
<p><em>This work was self-funded by OCamlPro as part of its effort to improve the OCaml ecosystem.</em></p>
<h3>
<a id="foundation" class="anchor"></a><a class="anchor-link" href="#foundation">Supporting the OCaml Software Foundation</a>
</h3>
<p>OCamlPro was proud and happy to initiate the <a href="https://www.dropbox.com/s/omba1d8vhljnrcn/OCaml-user-survey-2020.pdf?dl=0">OCaml User Survey 2020</a> as part of the mission of the OCaml Software Foundation. The goal of the survey was to better understand the community and its needs. The final results have not yet been published by the Foundation, we are looking forward to reading them soon!</p>
<h3>
<a id="events" class="anchor"></a><a class="anchor-link" href="#events">Events</a>
</h3>
<p>Though the year took its toll on our usual tour of the world conferences and events, OCamlPro members still took part in the annual 72-hour team programming competition organised by the International Conference on Functional Programming (ICFP). Our joint team “crapo on acid” went <a href="https://icfpcontest2020.github.io/#/scoreboard#final">through the final</a>!</p>
<h2>
<a id="formal-methods" class="anchor"></a><a class="anchor-link" href="#formal-methods">In the World of Formal Methods</a>
</h2>
<p><em>Work by Albin Coquereau, Mattias, Sylvain Conchon, Guillaume Bury and Louis Rustenholz</em></p>
<p><img src="/blog/assets/img/altergo-meeting.jpeg" alt="formal methods" /></p>
<p><a href="/blog/2020_06_05_interview_sylvain_conchon_joins_ocamlpro">Sylvain Conchon joined OCamlPro</a> as Formal Methods Chief Scientific Officer in 2020!</p>
<h3>
<a id="alt-ergo" class="anchor"></a><a class="anchor-link" href="#alt-ergo">Alt-Ergo Development</a>
</h3>
<p>OCamlPro develops and maintains <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a>, an automatic solver of mathematical formulas designed for program verification and based on Satisfiability Modulo Theories (SMT). Alt-Ergo was initially created within the <a href="https://vals.lri.fr/">VALS</a> team at <a href="https://www.universite-paris-saclay.fr/en">University of Paris-Saclay</a>.</p>
<p>In 2020, we focused on the maintainability of our solver. The first part of this work was to maintain and fix issues within the already released version. The 2.3.0 version (released in 2019) had some issues that needed to be fixed in <a href="https://ocamlpro.github.io/alt-ergo/About/changes.html#version-2-3-2-march-23-2020">minor releases</a>.</p>
<p>The second part of the maintainability work on Alt-Ergo contains more major features. All these features were released in the new <a href="https://alt-ergo.ocamlpro.com/#releases">version 2.4.0</a> of Alt-Ergo. The main goal of this release was to focus on the user experience and the documentation. This release also contains bug fixes and many other improvements. Alt-Ergo is on its way towards a new <a href="https://ocamlpro.github.io/alt-ergo/index.html">documentation</a>, and in particular a new documentation of its <a href="https://ocamlpro.github.io/alt-ergo/Input_file_formats/Native/index.html">native syntax</a>.</p>
<p>We also tried to improve the command line experience of our tools with the use of the <a href="https://erratique.ch/software/cmdliner">cmdliner library</a> to parse Alt-Ergo options. This library allows us to improve the manpage of our tool. We tried to harmonise the debug messages and to improve all of Alt-Ergo's outputs to make them clearer for users.</p>
<h3>
<a id="club" class="anchor"></a><a class="anchor-link" href="#club">Alt-Ergo Users’ Club and R&D Projects</a>
</h3>
<p>We thank our partners from the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users’ Club</a>, Adacore, CEA List, MERCE (Mitsubishi Electric R&D Centre Europe) and Trust-In-Soft, for their trust. Their support allows us to maintain our tool.</p>
<p>The club was launched in 2019 and the second annual meeting of the Alt-Ergo Users’ Club was held in mid-February 2020. Our annual meeting is the perfect place to review each partner’s needs regarding Alt-Ergo. This year, we had the pleasure of receiving our partners to discuss the roadmap for future Alt-Ergo developments and enhancements. If you want to join us for the next meeting (coming soon), contact us!</p>
<p>We also want to thank our partners from the FUI R&D Project LCHIP. Thanks to this project, we were able to add a new major feature in Alt-Ergo: the support for incremental commands (<code>push</code>, <code>pop</code> and <code>check-sat-assuming</code>) from the <a href="https://alt-ergo.ocamlpro.com/#releases">smt-lib2 standard</a>.</p>
<h3>
<a id="roadmap" class="anchor"></a><a class="anchor-link" href="#roadmap">Alt-Ergo’s Roadmap</a>
</h3>
<p>Some of the work we did in 2020 is not yet available. Thanks to our partner MERCE (Mitsubishi Electric R&D Centre Europe), we worked on the SMT model generation. Alt-Ergo is now (partially) able to output a model in the smt-lib2 format. Thanks to the <a href="http://why3.lri.fr/">Why3 team</a> from University of Paris-Saclay, we hope that this work will be available in the Why3 platform to help users in their program verification efforts.</p>
<p>Another project was launched in 2020 but is still in early development: the complete rework of our Try-Alt-Ergo website with new features such as model generation. The <a href="https://alt-ergo.ocamlpro.com/try.html">current version</a> of Try-Alt-Ergo allows users to use Alt-Ergo directly from their browsers (Firefox, Chromium) without the need for a server for computations.</p>
<p>This work needed a JavaScript-compatible version of Alt-Ergo. We have done some work to build our solver in two versions, one compatible with Node.js and another as a web worker. We hope this work will make it easier to use our SMT solver in web applications.</p>
<p><em>This work is funded in part by the FUI R&D Project LCHIP, MERCE, Adacore and with the support of the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users’ Club</a>.</em></p>
<h2>
<a id="rust" class="anchor"></a><a class="anchor-link" href="#rust">In the World of Rust</a>
</h2>
<p><em>Work by Adrien Champion</em></p>
<p><img src="/blog/assets/img/logo_rust.jpg" alt="rust" /></p>
<p>As OCaml-ians, we naturally saw in the Rust language a beautiful complement to our approach. One opportunity to explore this state-of-the art language has been to pursue our work on ocp-memprof and build <a href="https://github.com/OCamlPro/memthol">Memthol</a>, a visualizer and analyzer to profile OCaml programs. It works on memory dumps containing information about the size and (de)allocation date of part of the allocations performed by some execution of a program.</p>
<p>Between lockdowns, we’ve also been able to hold <a href="https://training.ocamlpro.com/">our Rust training</a>. It’s designed as a highly-modular vocational course, from 1 to 4 days. The training covers a beginner introduction to Rust’s basics features, crucial features and libraries for real-life development and advanced features, all through complex use-cases one would find in real life.</p>
<p><em>This work was self-funded by OCamlPro as part of our exploration of other statically and strongly typed functional languages.</em></p>
<h2>
<a id="blockchains" class="anchor"></a><a class="anchor-link" href="#blockchains">In the World of Blockchain Languages</a>
</h2>
<p><em>Work by David Declerck and Steven de Oliveira</em></p>
<p><img src="/blog/assets/img/logo_blockchain.jpg" alt="Blockchain languages" /></p>
<p>One of our favourite activities is to develop new programming languages, specialized for specific domains, but with nice properties like clear semantics, strong typing, static typing and functional features. In 2020, we applied our skills in the domain of blockchains and smart contracts, with the creation of a new language, Love, and work on a well-known language, Solidity.</p>
<p>In 2020, our blockchain experts released <a href="https://dune.network/docs/dune-node-next/love-doc/introduction.html">Love</a>, a type-safe language with an ML syntax and suited for formal verification. In a few words, Love is designed to be expressive for fast development, efficient in execution time and cheap in storage, and readable in terms of smart contracts auditability. Yet, it has a clear and formal semantics and a strong type system to detect bugs. It allows contracts to use other contracts as libraries, and to call viewers on other contracts. Contracts developed in Love can also be formally verified.</p>
<p>We also released a <a href="https://solidity.readthedocs.io/en/v0.6.8/">Solidity</a> parser and printer written in OCaml using Menhir, and used it to implement a full interpreter directly in a blockchain. Solidity is probably the most widely used language for smart contracts; it was born on Ethereum, but many other blockchains provide it as a way to easily onboard new developers coming from the Ethereum ecosystem. In the future, we plan to extend this work with formal verification of Solidity smart contracts.</p>
<p><em>This is a joint effort with <a href="https://www.origin-labs.com/">Origin Labs</a>, the company created to tackle blockchain-related challenges.</em></p>
<h2>Towards 2021</h2>
<p><img src="/blog/assets/img/picture_towards.jpg" alt="towards" /></p>
<p>Adaptability and continuous improvement, that’s what 2020 brought to OCamlPro!</p>
<p>We will remember 2020 as a complicated year, but one that allowed us to surpass ourselves and challenge our projects. We are very proud of our team who all continued to grow, learn, and develop our projects in this particular context. We are more motivated than ever for the coming year, which marks our tenth year anniversary! We’re excited to continue sharing our knowledge of the OCaml world and to accompany you in your own projects.</p>
Release of Alt-Ergo 2.4.0https://ocamlpro.com/blog/2021_01_22_release_of_alt_ergo_2_4_02021-01-22T08:12:13Z2021-01-22T08:12:13Z
Albin Coquereau
A new release of Alt-Ergo (version 2.4.0) is available. You can get it from Alt-Ergo's website. The associated opam package will be published in the next few days. This release contains some major novelties: Alt-Ergo supports incremental commands (push/pop) from the smt-lib standard.
We switched co...<p>A new release of Alt-Ergo (version 2.4.0) is available.</p>
<p>You can get it from <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo's website</a>. The associated opam package will be published in the next few days.</p>
<p>This release contains some major novelties:</p>
<ul>
<li>Alt-Ergo supports incremental commands (push/pop) from the <a href="https://smtlib.cs.uiowa.edu/">smt-lib </a>standard.
</li>
<li>We switched command line parsing to use <a href="https://erratique.ch/software/cmdliner">cmdliner</a>. You will need to use <code>--<option name></code> instead of <code>-<option name></code>. Some options have also been renamed, see the manpage or the documentation.
</li>
<li>We improved the online documentation of our solver, available <a href="https://ocamlpro.github.io/alt-ergo/">here</a>.
</li>
</ul>
<p>This release also contains some minor novelties:</p>
<ul>
<li><code>.mlw</code> and <code>.why</code> extensions are deprecated; use of the <code>.ae</code> extension is advised.
</li>
<li>Add <code>--input</code> (resp. <code>--output</code>) option to manually set the input (resp. output) file format
</li>
<li>Add <code>--pretty-output</code> option to add better debug formatting and to add colors
</li>
<li>Add exponentiation operation, <code>**</code> in native Alt-Ergo syntax. The operator is fully interpreted when applied to constants
</li>
<li>Fix <code>--steps-count</code> and improve the way steps are counted (AdaCore contribution)
</li>
<li>Add <code>--instantiation-heuristic</code> option that can enable lighter or heavier instantiation
</li>
<li>Reduce the instantiation context (considered foralls / exists) in CDCL-Tableaux to better mimic the Tableaux-like SAT solver
</li>
<li>Multiple bugfixes
</li>
</ul>
<p>The full list of changes is available <a href="https://ocamlpro.github.io/alt-ergo/About/changes.html">here</a>. As usual, do not hesitate to report bugs, to ask questions, or to give your feedback!</p>
opam 2.1.0~beta4 releasedhttps://ocamlpro.com/blog/2021_01_13_opam_2.1.0_beta4_released2021-01-13T08:12:13Z2021-01-13T08:12:13Z
David Allsopp (OCamlLabs)
Feedback on this post is welcomed on Discuss! On behalf of the opam team, it gives me great pleasure to announce the third beta release of opam 2.1. Don’t worry, you didn’t miss beta3 - we had an issue with a configure script that caused beta2 to report as beta3 in some instances, so we skipped ...<p><em>Feedback on this post is welcomed on <a href="https://discuss.ocaml.org/t/ann-opam-2-1-0-beta4/7252">Discuss</a>!</em></p>
<p>On behalf of the opam team, it gives me great pleasure to announce the third beta release of opam 2.1. Don’t worry, you didn’t miss beta3 - we had an issue with a configure script that caused beta2 to report as beta3 in some instances, so we skipped to beta4 to avoid any further confusion!</p>
<p>We encourage you to try out this new beta release: there are instructions for doing so in <a href="https://github.com/ocaml/opam/wiki/How-to-test-an-opam-feature">our wiki</a>. The instructions include taking a backup of your <code>~/.opam</code> root as part of the process, which can be restored in order to wind back. <em>Please note that local switches which are written to by opam 2.1 are upgraded and will need to be rebuilt if you go back to opam 2.0</em>. This can either be done by removing <code>_opam</code> and repeating whatever you use in your build process to create the switch, or you can use <code>opam switch export switch.export</code> to backup the switch to a file before installing new packages. Note that opam 2.1 <em>shouldn’t</em> upgrade a local switch unless you upgrade the base packages (i.e. the compiler).</p>
<h2>What’s new in opam 2.1?</h2>
<ul>
<li>Switch invariants
</li>
<li>Improved options configuration (see the new <code>option</code> and expanded <code>var</code> sub-commands)
</li>
<li>Integration of system dependencies (formerly the opam-depext plugin), increasing their reliability as it integrates the solving step
</li>
<li>Creation of lock files for reproducible installations (formerly the opam-lock plugin)
</li>
<li>CLI versioning, allowing cleaner deprecations for opam now and also improvements to semantics in future without breaking backwards-compatibility
</li>
<li>Performance improvements to opam-update, conflict messages, and many other areas
</li>
<li>New plugins: opam-compiler and opam-monorepo
</li>
</ul>
<h3>Switch invariants</h3>
<p>In opam 2.0, when a switch is created the packages selected are put into the “base” of the switch. These packages are not normally considered for upgrade, in order to ease pressure on opam’s solver. This was a much bigger concern early on in opam 2.0’s development, but is less of a problem with the default mccs solver.</p>
<p>However, it’s a problem for system compilers. opam would detect that your system compiler version had changed, but be unable to upgrade the ocaml-system package unless you went through a slightly convoluted process with <code>--unlock-base</code>.</p>
<p>In opam 2.1, base packages have been replaced by switch invariants. The switch invariant is a package formula which must be satisfied on every upgrade and install. All existing switches’ base packages could just be expressed as <code>package1 & package2 & package3</code> etc. but opam 2.1 recognises many existing patterns and simplifies them, so in most cases the invariant will be <code>"ocaml-base-compiler" {= 4.11.1}</code>, etc. This means that <code>opam switch create my_switch ocaml-system</code> now creates a <em>switch invariant</em> of <code>"ocaml-system"</code> rather than a specific version of the <code>ocaml-system</code> package. If your system OCaml package is updated, <code>opam upgrade</code> will seamlessly switch to the new package.</p>
<p>This also allows you to have switches which automatically install new point releases of OCaml. For example:</p>
<pre><code class="language-shell-session">$~ opam switch create ocaml-4.11 --formula='"ocaml-base-compiler" {>= "4.11.0" & < "4.12.0~"}' --repos=old=git+https://github.com/ocaml/opam-repository#a11299d81591
$~ opam install utop
</code></pre>
<p>This creates a switch with OCaml 4.11.0 (the <code>--repos=</code> was just there to select a version of opam-repository from before 4.11.1 was released). Now issue:</p>
<pre><code class="language-shell-session">$~ opam repo set-url old git+https://github.com/ocaml/opam-repository
$~ opam upgrade
</code></pre>
<p>and opam 2.1 will automatically offer to upgrade to OCaml 4.11.1, along with a rebuild of the switch. There’s not yet a clean CLI for specifying the formula, but we intend to iterate further on this with future opam releases so that there is an easier way of saying “install OCaml 4.11.x”.</p>
<h3>opam depext integration</h3>
<p>opam has long included the ability to install system dependencies automatically via the <a href="https://github.com/ocaml-opam/opam-depext">depext plugin</a>. This plugin has been promoted to a native feature of opam 2.1.0 onwards, giving the following benefits:</p>
<ul>
<li>You no longer have to remember to run <code>opam depext</code>, opam always checks depexts (there are options to disable this or automate it for CI use). Installation of an opam package in a CI system is now as easy as <code>opam install .</code>, without having to do the dance of <code>opam pin add -n/depext/install</code>. Just one command now for the common case!
</li>
<li>The solver is only called once, which both saves time and also stabilises the behaviour of opam in cases where the solver result is not stable. It was possible to get one package solution for the <code>opam depext</code> stage and a different solution for the <code>opam install</code> stage, resulting in some depexts missing.
</li>
<li>opam now has full knowledge of depexts, which means that packages can be automatically selected based on whether a system package is already installed. For example, if you have <em>neither</em> MariaDB nor MySQL dev libraries installed, <code>opam install mysql</code> will offer to install <code>conf-mysql</code> and <code>mysql</code>, but if you have the MariaDB dev libraries installed, opam will offer to install <code>conf-mariadb</code> and <code>mysql</code>.
</li>
</ul>
<h3>opam lock files and reproducibility</h3>
<p>When opam was first released, it had the mission of gathering together scattered OCaml source code to build a <a href="https://github.com/ocaml/opam-repository">community repository</a>. As time marches on, the size of the opam repository has grown tremendously, to over 3000 unique packages with over 18000 unique versions. opam looks at all these packages and is designed to solve for the best constraints for a given package, so that your project can keep up with releases of your dependencies.</p>
<p>While this works well for libraries, we need a different strategy for projects that need to test and ship using a fixed set of dependencies. To satisfy this use-case, opam 2.0.0 shipped with support for <em>using</em> <code>project.opam.locked</code> files. These are normal opam files but with exact versions of dependencies. The lock file can be used as simply as <code>opam install . --locked</code> to have a reproducible package installation.</p>
<p>With opam 2.1.0, the creation of lock files is also now integrated into the client:</p>
<ul>
<li><code>opam lock</code> will create a <code>.locked</code> file for your current switch and project, that you can check into the repository.
</li>
<li><code>opam switch create . --locked</code> can be used by users to reproduce your dependencies in a fresh switch.
</li>
</ul>
<p>This lets a project simultaneously keep up with the latest dependencies (without lock files) while providing a stricter set for projects that need it (with lock files).</p>
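<p>As a rough sketch of the intended workflow (directory layout and package names are illustrative), a project might do the following:</p>
<pre><code class="language-shell-session"># in the project directory, record the exact versions from the current switch
$~ opam lock
# commit the generated project.opam.locked file; later, on another machine:
$~ opam switch create . --locked
# or, inside an existing switch:
$~ opam install . --locked
</code></pre>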
<h3>CLI Versioning</h3>
<p>A new <code>--cli</code> switch was added to the first beta release, but it’s only now that it’s being widely used. opam is a complex enough system that sometimes bug fixes need to change the semantics of some commands. For example:</p>
<ul>
<li><code>opam show --file</code> needed to change behaviour
</li>
<li>The addition of new controls for setting global variables means that the <code>opam config</code> was becoming cluttered and some things want to move to <code>opam var</code>
</li>
<li><code>opam switch install 4.11.1</code> still works in opam 2.0, but it’s really an OPAM 1.2.2 syntax.
</li>
</ul>
<p>Changing the CLI is exceptionally painful since it can break scripts and tools which themselves need to drive <code>opam</code>. CLI versioning is our attempt to solve this. The feature is inspired by the <code>(lang dune ...)</code> stanza in <code>dune-project</code> files which has allowed the Dune project to rename variables and alter semantics without requiring every single package using Dune to upgrade their <code>dune</code> files on each release.</p>
<p>Now you can specify which version of opam you expect the command to be run against. In day-to-day use of opam at the terminal, you wouldn’t specify it, and you’ll get the latest version of the CLI. For example: <code>opam var --global</code> is the same as <code>opam var --cli=2.1 --global</code>. However, if you issue <code>opam var --cli=2.0 --global</code>, you will be told that <code>--global</code> was added in 2.1 and so is not available to you. You can see similar things with the renaming of <code>opam upgrade --unlock-base</code> to <code>opam upgrade --update-invariant</code>.</p>
<p>The intention is that <code>--cli</code> should be used in scripts, user guides (e.g. blog posts), and in software which calls opam. The only decision you have to take is the <em>oldest</em> version of opam which you need to support. If your script is using a new opam 2.1 feature (for example <code>opam switch create --formula=</code>) then you simply don’t support opam 2.0. If you need to support opam 2.0, then you can’t use <code>--formula</code> and should use <code>--packages</code> instead. opam 2.0 does not have the <code>--cli</code> option, so for opam 2.0 instead of <code>--cli=2.0</code> you should set the environment variable <code>OPAMCLI</code> to <code>2.0</code>. As with <em>all</em> opam command line switches, <code>OPAMCLI</code> is simply the equivalent of <code>--cli</code> which opam 2.1 will pick-up but opam 2.0 will quietly ignore (and, as with other options, the command line takes precedence over the environment).</p>
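<p>As a small, illustrative sketch of how this looks in practice (the exact output printed by opam will of course differ):</p>
<pre><code class="language-shell-session"># in a script targeting opam 2.1 features, pin the CLI explicitly
$~ opam var --cli=2.1 --global
# the same request against the 2.0 CLI is refused, since --global was added in 2.1
$~ opam var --cli=2.0 --global
# for scripts that must still run under opam 2.0 itself, use the environment
# variable instead of --cli; opam 2.0 simply ignores OPAMCLI
$~ OPAMCLI=2.0 opam config list
</code></pre>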
<p>Note that opam 2.1 sets <code>OPAMCLI=2.0</code> when building packages, so on the rare instances where you need to use the <code>opam</code> command in a <em>package</em> <code>build:</code> command (or in your build system), you <em>must</em> specify <code>--cli=2.1</code> if you’re using new features.</p>
<p>There’s even more detail on this feature <a href="https://github.com/ocaml/opam/wiki/Spec-for-opam-CLI-versioning">in our wiki</a>. We’re still finalising some details on exactly how <code>opam</code> behaves when <code>--cli</code> is not given, but we’re hoping that this feature will make it much easier in future releases for opam to make required changes and improvements to the CLI without breaking existing set-ups and tools.</p>
<h2>What’s new since the last beta?</h2>
<ul>
<li>opam now uses CLI versioning (<a href="https://github.com/ocaml/opam/pull/4385">#4385</a>)
</li>
<li>opam now exits with code 31 if all failures were during fetch operations (<a href="https://github.com/ocaml/opam/issues/4214">#4214</a>)
</li>
<li><code>opam install</code> now has a <code>--download-only</code> flag (<a href="https://github.com/ocaml/opam/issues/4036">#4036</a>), allowing opam’s caches to be primed
</li>
<li><code>opam init</code> now advises the correct shell-specific command for <code>eval $(opam env)</code> (<a href="https://github.com/ocaml/opam/pull/4427">#4427</a>)
</li>
<li><code>post-install</code> hooks are now allowed to modify or remove installed files (<a href="https://github.com/ocaml/opam/pull/4388">#4388</a>)
</li>
<li>New package variable <code>opamfile-loc</code> with the location of the installed package opam file (<a href="https://github.com/ocaml/opam/pull/4402">#4402</a>)
</li>
<li><code>opam update</code> now has <code>--depexts</code> flag (<a href="https://github.com/ocaml/opam/issues/4355">#4355</a>), allowing the system package manager to update too
</li>
<li>depext support NetBSD and DragonFlyBSD added (<a href="https://github.com/ocaml/opam/pull/4396">#4396</a>)
</li>
<li>The format-preserving opam file printer has been overhauled (<a href="https://github.com/ocaml/opam/issues/3993">#3993</a>, <a href="https://github.com/ocaml/opam/pull/4298">#4298</a> and <a href="https://github.com/ocaml/opam/pull/4302">#4302</a>)
</li>
<li>pins are now fetched in parallel (<a href="https://github.com/ocaml/opam/issues/4315">#4315</a>)
</li>
<li><code>os-family=ubuntu</code> is now treated as <code>os-family=debian</code> (<a href="https://github.com/ocaml/opam/pull/4441">#4441</a>)
</li>
<li><code>opam lint</code> now checks that strings in filtered package formulae are booleans or variables (<a href="https://github.com/ocaml/opam/issues/4439">#4439</a>)
</li>
</ul>
<p>and many other bug fixes as listed <a href="https://github.com/ocaml/opam/releases/tag/2.1.0-beta4">on the release page</a>.</p>
<h2>New Plugins</h2>
<p>Several features that were formerly plugins have been integrated into opam 2.1.0. We have also developed some <em>new</em> plugins that satisfy emerging workflows from the community and the core OCaml team. They are available for use with the opam 2.1 beta as well, and feedback on them should be directed to the respective GitHub trackers for those plugins.</p>
<h3>opam compiler</h3>
<p>The <a href="https://github.com/ocaml-opam/opam-compiler"><code>opam compiler</code></a> plugin can be used to create switches from various sources such as the main opam repository, the ocaml-multicore fork, or a local development directory. It can use Git tag names, branch names, or PR numbers to specify what to install.</p>
<p>Once installed, these are normal opam switches, and one can install packages in them. To iterate on a compiler feature and try opam packages at the same time, it supports two ways to reinstall the compiler: either a safe and slow technique that will reinstall all packages, or a quick way that will just overwrite the compiler in place.</p>
<h3>opam monorepo</h3>
<p>The <a href="https://github.com/ocamllabs/opam-monorepo"><code>opam monorepo</code></a> plugin lets you assemble standalone dune workspaces with your projects and all of their opam dependencies, letting you build it all from scratch using only Dune and OCaml. This satisfies the “monorepo” workflow which is commonly requested by large projects that need all of their dependencies in one place. It is also being used by projects that need global cross-compilation for all aspects of a codebase (including C stubs in packages), such as the MirageOS unikernel framework.</p>
<h2>Next Steps</h2>
<p>This is anticipated to be the final beta in the 2.1 series, and we will be moving to release candidate status after this. We could really use your help with testing this release in your infrastructure and projects; please let us know if you run into any blockers. If you have feature requests, please also report them on <a href="https://github.com/ocaml/opam/issues">our issue tracker</a> -- we will be planning the next release cycle once we ship opam 2.1.0 shortly.</p>
Memthol: exploring program profilinghttps://ocamlpro.com/blog/2020_12_01_memthol_exploring_program_profiling2020-12-01T08:12:13Z2020-12-01T08:12:13Z
Adrien Champion
Memthol is a visualizer and analyzer for program profiling. It works on memory dumps containing information about the size and (de)allocation date of part of the allocations performed by some execution of a program. For information regarding building memthol, features, browser compatibility… refer...<p><img src="/blog/assets/img/banner_memprof_banniere_blue.png" alt="" /></p>
<p><em>Memthol</em> is a visualizer and analyzer for program profiling. It works on memory <em>dumps</em> containing information about the size and (de)allocation date of part of the allocations performed by some execution of a program.</p>
<blockquote>
<p>For information regarding building memthol, features, browser compatibility… refer to the <a href="https://github.com/OCamlPro/memthol">memthol github repository</a>. <em>Please note that Memthol, as a side project, is a work in progress that remains in beta status for now.</em></p>
</blockquote>
<p><img src="https://raw.githubusercontent.com/OCamlPro/memthol/master/rsc/example.png" alt="" /></p>
<h4>Memthol's background</h4>
<p>The Memthol work was started more than a year ago (we had published a short introductory paper at the <a href="https://jfla.inria.fr/jfla2020.html">JFLA2020</a>). The whole idea was to build upon the previous work originally achieved on <a href="https://memprof.typerex.org/">ocp-memprof</a>, and look for some extra funding to achieve a usable and industrial version. Then came the excellent <a href="https://blog.janestreet.com/finding-memory-leaks-with-memtrace/">memtrace profiler</a> by Jane Street's team (congrats!). Memthol is a self-funded side project that we think is still worth giving to the OCaml community: its approach is valuable and can be complementary. It is released under the free GPL licence v3.</p>
<h4>Memthol's versatility: supporting memtrace's dump format</h4>
<p>The memtrace format is nicely designed and polished enough to be considered a future standard for other tools. This is why Memthol supports Jane Street's <em>dumper</em> format instead of our own dumper library's.</p>
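<p>As a side note for readers who want to produce such dumps themselves, a minimal sketch of an instrumented program could look like this (assuming the <code>memtrace</code> opam package; the program body is purely illustrative):</p>
<pre><code class="language-Ocaml">(* main.ml -- tracing starts only when the MEMTRACE environment variable is
   set to an output file, e.g.  MEMTRACE=dump.ctf ./main.exe *)
let () =
  Memtrace.trace_if_requested ();
  (* the rest of the program allocates as usual *)
  let l = List.init 1_000_000 string_of_int in
  Printf.printf "%d elements allocated\n" (List.length l)
</code></pre>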
<h4>Why choose Rust to implement Memthol?</h4>
<p>We've been exploring the Rust language for more than a year now. The Memthol work was the opportunity to further explore this state-of-the-art language. <em>We are open to extra funding, to deepen the Memthol work should industrial users be interested.</em></p>
<h4>Memthol's How-to</h4>
<blockquote>
<p>The following steps are from the <a href="https://ocamlpro.github.io/memthol/mini_tutorial/">Memthol Github howto</a>.</p>
<ul>
<li><strong>1.</strong> <a href="https://ocamlpro.github.io/memthol/mini_tutorial/">Introduction</a>
</li>
<li><strong>2.</strong> <a href="https://ocamlpro.github.io/memthol/mini_tutorial/basics.html">Basics</a>
</li>
<li><strong>3.</strong> <a href="https://ocamlpro.github.io/memthol/mini_tutorial/charts.html">Charts</a>
</li>
<li><strong>4.</strong> <a href="https://ocamlpro.github.io/memthol/mini_tutorial/global_settings.html">Global Settings</a>
</li>
<li><strong>5.</strong> <a href="https://ocamlpro.github.io/memthol/mini_tutorial/callstack_filters.html">Callstack Filters</a>
</li>
</ul>
</blockquote>
<h2>Introduction</h2>
<p>This tutorial deals with the BUI (<strong>B</strong>rowser <strong>U</strong>ser <strong>I</strong>nterface) aspect of the profiling. How the dumps are generated is outside of the scope of this document. Currently, memthol accepts memory dumps produced by <a href="https://blog.janestreet.com/finding-memory-leaks-with-memtrace">Memtrace</a> (github repository <a href="https://github.com/janestreet/memtrace">here</a>). A memtrace dump for a program execution is a single <a href="https://diamon.org/ctf"><strong>C</strong>ommon <strong>T</strong>race <strong>F</strong>ormat</a> (CTF) file.</p>
<p>This tutorial uses CTF files from the memthol repository. All paths mentioned in the examples are from its root.</p>
<p>Memthol is written in Rust and is composed of</p>
<ul>
<li>a server, written in pure Rust, and
</li>
<li>a client, written in Rust and compiled to web assembly.
</li>
</ul>
<p>The server contains the client, which it will serve at some address on some port when launched.</p>
<h3>Running Memthol</h3>
<p>Memthol must be given a path to a CTF file generated by memtrace.</p>
<pre><code class="language-shell-session">> ls rsc/dumps/ctf/flamba.ctf
rsc/dumps/ctf/flamba.ctf
> memthol rsc/dumps/ctf/flamba.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/flamba.ctf`
|===|
</code></pre>
<h2>Basics</h2>
<p>Our running example in this section will be <code>rsc/dumps/ctf/mini_ae.ctf</code>:</p>
<pre><code class="language-shell-session">❯ memthol --filter_gen none rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|
</code></pre>
<p>Notice the odd <code>--filter_gen none</code> passed to memthol. Ignore it for now; it will be discussed later in this section.</p>
<p>Once memthol is running, <code>http://localhost:7878/</code> (here) will lead you to memthol's BUI, which should look something like this:</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/default.png" alt="" /></p>
<p>Click on the orange <strong>everything</strong> tab at the bottom left of the screen.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/three_parts.png" alt="" /></p>
<p>Memthol's interface is split in three parts:</p>
<ul>
<li>the central, main part displays charts. There is only one here, showing the evolution of the program's total memory size over time based on the memory dump.
</li>
<li>the header gives statistics about the memory dump and handles general settings. There is currently only one, the <em>time window</em>.
</li>
<li>the footer controls your <em>filters</em> (there is only one here), which we are going to discuss right now.
</li>
</ul>
<h3>Filters</h3>
<p><em>Filters</em> allow splitting allocations and displaying them separately. A filter is essentially a set of allocations. Memthol has two built-in filters. The first one is the <strong>everything</strong> filter. You cannot really do anything with it except for changing its name and color using the filter settings in the footer.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/everything_name_color.png" alt="" /></p>
<p>Notice that when a filter is modified, two buttons appear in the top-left part of the footer. The first reverts the changes while the second one saves them. Let's save these changes.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/everything_saved.png" alt="" /></p>
<p>The <strong>everything</strong> filter always contains all allocations in the memory dump. It cannot be changed besides the cosmetic changes we just did. These changes are reverted in the rest of the section.</p>
<h3>Custom Filters</h3>
<p>Let's create a new filter using the <code>+</code> add button in the top-right part of the footer.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/new_filter.png" alt="" /></p>
<p>Notice that, unlike <strong>everything</strong>, the settings for our new filter have a <strong>Catch allocation if …</strong> (empty) section with a <code>+</code> add button. Let's click on that.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/new_sub_filter.png" alt="" /></p>
<p>This adds a criterion to our filter. Let's modify it so that our filter catches everything of size greater than zero machine words, rename the filter, and save these changes.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/new_filter_1.png" alt="" /></p>
<p>The tab for our filter now shows <strong>(3)</strong> next to its name, indicating that this filter catches 3 allocations, which is all the allocations of the (tiny) dump.</p>
<p>Now, create a new filter and modify it so that it catches allocations made in file <code>weak.ml</code>. This requires</p>
<ul>
<li>creating a filter,
</li>
<li>adding a criterion to that filter,
</li>
<li>switching it from <code>size</code> to <code>callstack</code>,
</li>
<li>removing the trailing <code>**</code> (anything) by erasing it,
</li>
<li>writing <code>weak.ml</code> as the last file that should appear in the callstack.
</li>
</ul>
<p>After saving it, you should get the following.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/new_filter_2.png" alt="" /></p>
<p>Sadly, this filter does not match anything, although some allocations fit this filter. This is because a <strong>custom filter</strong> <code>F</code> “catches” an allocation if</p>
<ul>
<li>all of the criteria of <code>F</code> are true for this allocation, and
</li>
<li>the allocation is not caught by any <strong>custom</strong> filter at the left of <code>F</code> (note that the <strong>everything</strong> filter is not a <strong>custom filter</strong>).
</li>
</ul>
<p>In other words, all allocations go through the list of custom filters from left to right, and are caught by the first filter such that all of its criteria are true for this allocation. As such, it is similar to switch/case and pattern matching.</p>
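<p>To make that analogy concrete, here is a small OCaml sketch of the first-match semantics (the <code>alloc</code> and <code>filter</code> types and the <code>catches</code> function are illustrative, not Memthol's actual Rust implementation):</p>
<pre><code class="language-Ocaml">type alloc = { size : int; callstack : string list }
type filter = { name : string; criteria : (alloc -> bool) list }

(* a custom filter catches an allocation when all of its criteria hold *)
let catches f a = List.for_all (fun crit -> crit a) f.criteria

(* allocations go through the custom filters from left to right; the first
   match wins, and anything left over goes to the built-in catch-all *)
let assign custom_filters a =
  match List.find_opt (fun f -> catches f a) custom_filters with
  | Some f -> f.name
  | None -> "catch-all"
</code></pre>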
<p>Let's move our new filter to the left by clicking the left arrow next to it, and save the change.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/new_filter_3.png" alt="" /></p>
<p>Nice.</p>
<p>You can remove a filter by selecting it and clicking the <code>-</code> remove button in the top-right part of the footer, next to the <code>+</code> add filter button. This only works for <strong>custom</strong> filters; you cannot remove built-in filters.</p>
<p>Now, remove the first filter we created (size ≥ 0), which should give you this:</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/new_filter_4.png" alt="" /></p>
<p>Out of nowhere, we get the second and last built-in filter: <strong>catch-all</strong>. When some allocations are not caught by any of your filters, they will end up in this filter. <strong>Catch-all</strong> is not visible when it does not catch any allocation, which is why it was (mostly) not visible until now. The filters we wrote previously were catching all the allocations.</p>
<blockquote>
<p>In the switch/case analogy, <strong>catch-all</strong> is the <code>else</code>/<code>default</code> branch. In pattern matching, it would be a trailing wildcard <code>_</code>.</p>
</blockquote>
<p>So, <code>weak.ml</code> only catches one of the three allocations: <strong>catch-all</strong> appears and indicates it matches the remaining two.</p>
<blockquote>
<p>It is also possible to write filter criteria over allocations' callstacks. This is discussed in the <a href="https://ocamlpro.github.io/memthol/mini_tutorial/callstack_filters.html">Callstack Filters Section</a>.</p>
</blockquote>
<h3>Filter Generation</h3>
<p>When we launched this section's running example, we passed <code>--filter_gen none</code> to memthol. This is because, by default, memthol will run <em>automatic filter generation</em> which scans allocations and generates filters. The default (and currently only) one creates one filter per allocation-site file.</p>
<blockquote>
<p>For more details, in particular filter generation customization, run <code>memthol --filter_gen help</code>.</p>
</blockquote>
<p>If we relaunch the example without <code>--filter_gen none</code></p>
<pre><code class="language-shell-session">❯ memthol rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|
</code></pre>
<p>we get something like this (actual colors may vary):</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/basics_pics/filter_gen.png" alt="" /></p>
<h2>Charts</h2>
<p>This section uses the same running example as the last section.</p>
<pre><code class="language-shell-session">❯ memthol rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|
</code></pre>
<h3>Filter Toggling</h3>
<p>The first way to interact with a chart is to (de)activate filters. Each chart has its own filter tabs allowing to toggle filters on/off.</p>
<p>From the initial settings</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/charts_pics/init.png" alt="" /></p>
<p>click on all filters but <strong>everything</strong> to toggle them off.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/charts_pics/only_everything.png" alt="" /></p>
<p>Let's create a new chart. The only kind of chart that can be constructed currently is total size over time, so click on <strong>create chart</strong> below our current, lone chart.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/charts_pics/two_charts_1.png" alt="" /></p>
<p>Deactivate <strong>everything</strong> in the second chart.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/charts_pics/two_charts_2.png" alt="" /></p>
<p>Nice. We now have the overall total size over time in the first chart, and the details for each filter in the second one.</p>
<p>Next, notice that both charts have, on the left of their title, a down (first chart) and up (second chart) arrow. This moves the charts up and down.</p>
<p>On the right of the title, we have a settings <code>...</code> button which is discussed <a href="https://ocamlpro.github.io/memthol/mini_tutorial/charts.html#chart-settings">below</a>. The next button collapses the chart. If we click on the <em>collapse</em> button of the first chart, it collapses and the button turns into an <em>expand</em> button.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/charts_pics/collapsed.png" alt="" /></p>
<p>The last button in the chart header removes the chart.</p>
<h3>Chart Settings</h3>
<p>Clicking the settings <code>...</code> button in the header of any chart displays its settings. (Clicking on the button again hides them.)</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/charts_pics/settings_1.png" alt="" /></p>
<p>Currently, these chart settings only allow renaming the chart and changing its <strong>display mode</strong>.</p>
<h4>Display Mode</h4>
<p>In memthol, a chart can be displayed in one of three ways:</p>
<ul>
<li>normal, the one we used so far,
</li>
<li>stacked area, where the values of each filter are displayed on top of each other, and
</li>
<li>stacked area percent, same as stacked area but values are displayed as percents of the total.
</li>
</ul>
<p>Here is the second chart from our example displayed as stacked area for instance:</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/charts_pics/settings_stacked.png" alt="" /></p>
<h2>Global Settings</h2>
<p>This section uses the same running example as the last section.</p>
<pre><code class="language-shell-session">❯ memthol rsc/dumps/ctf/mini_ae.ctf
|===| Starting
| url: http://localhost:7878
| target: `rsc/dumps/ctf/mini_ae.ctf`
|===|
</code></pre>
<p>There is currently only one global setting: the <em>time window</em>.</p>
<h3>Time Window</h3>
<p>The <em>time window</em> global setting controls the time interval displayed by all the charts.</p>
<p>In our example,</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/global_settings_pics/init.png" alt="" /></p>
<p>not much is happening before (roughly) <code>0.065</code> seconds. Let's have the time window start at that point:</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/global_settings_pics/time_window_1.png" alt="" /></p>
<p>Similar to filter editing, we can apply or cancel this change using the two buttons that appeared in the bottom-left corner of the header.</p>
<p>Saving these changes yields</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/global_settings_pics/time_window_2.png" alt="" /></p>
<p>Here is the same chart but with the time window upper-bound set at <code>0.074</code>.</p>
<p><img src="https://ocamlpro.github.io/memthol/mini_tutorial/global_settings_pics/time_window_3.png" alt="" /></p>
<h2>Callstack Filters</h2>
<p>Callstack filters are filters operating over allocation properties that are sequences of strings (potentially with some other data). Currently, this means <strong>allocation callstacks</strong>, where the strings are file names with line/column information.</p>
<h3>String Filters</h3>
<p>A string filter can have three shapes: an actual <em>string value</em>, a <em>regex</em>, or a <em>match anything</em> / <em>wildcard</em> filter represented by the string <code>"..."</code>. This wildcard filter is discussed in <a href="https://ocamlpro.github.io/memthol/mini_tutorial/callstack_filters.html#the-wildcard-filter">its own section</a> below.</p>
<p>A string value is simply given as a value. To match precisely the string <code>"file_name"</code>, one only needs to write <code>file_name</code>. So, a filter that matches precisely the list of strings <code>[ "file_name_1", "file_name_2" ]</code> will be written</p>
<table>
<thead>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>string list</td>
<td>contains</td>
<td>`[ file_name_1 file_name_2 ]`</td>
</tr>
</tbody>
</table>
<p>A <em>regex</em> on the other hand has to be written between <code>#"</code> and <code>"#</code>. If we want the same filter as above, but want to relax the first string description to be <code>file_name_<i></code> where <code><i></code> is a single digit, we write the filter as</p>
<table>
<thead>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>string list</td>
<td>contains</td>
<td>`[ #"file_name_[0-9]"# file_name_2 ]`</td>
</tr>
</tbody>
</table>
<h3>The Wildcard Filter</h3>
<p>The wildcard filter, written <code>...</code>, <strong>lazily</strong> (in general, see below) matches a repetition of any string-like element of the list. To break this definition down, let us separate two cases: the first one is when <code>...</code> is not followed by another string-like filter, and the second one is when it is followed by another filter.</p>
<p>In the first case, <code>...</code> simply matches everything. Consider for instance the filter</p>
<table>
<thead>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>string list</td>
<td>contain</td>
<td>`[ #"file_name_[0-9]"# ... ]`</td>
</tr>
</tbody>
</table>
<p>This filter matches any list of strings that starts with a string accepted by the first regex filter. The following lists of strings are all accepted by the filter above.</p>
<ul>
<li><code>[ file_name_0 ]</code>
</li>
<li><code>[ file_name_7 anything at all ]</code>
</li>
<li><code>[ file_name_3 file_name_7 ]</code>
</li>
</ul>
<p>Now, there is one case when <code>...</code> is not actually lazy: when the <code>n</code> string-filters <em>after</em> it are not <code>...</code>. In this case, all elements of the list but the <code>n</code> last ones will be skipped, leaving them for the <code>n</code> last string filters.</p>
<p>For this reason</p>
<table>
<thead>
<tr>
<th>
</th>
<th>
</th>
<th>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>string list</td>
<td>contain</td>
<td>`[ ... #"file_name_[0-9]"# ]`</td>
</tr>
</tbody>
</table>
<p>does work as expected. For example, on the string list</p>
<pre><code class="language-shell-session">[ "some_file_name" "file_name_7" "another_file_name" "file_name_0" ]
</code></pre>
<p>a lazy behavior would not match. First, <code>...</code> would match anything up to and excluding a string recognized by <code>#"file_name_[0-9]"#</code>. So <code>...</code> would match <code>some_file_name</code>, but that's it since <code>file_name_7</code> is a match for <code>#"file_name_[0-9]"#</code>. Hence the filter would reject this list of strings, because there should be nothing left after the match for <code>#"file_name_[0-9]"#</code>. But there are still <code>another_file_name</code> and <code>file_name_0</code> left.</p>
<p>Instead, the filter works as expected. <code>...</code> discards all elements but the last one <code>file_name_0</code>, which is accepted by <code>#"file_name_[0-9]"#</code>.</p>
<h3>Callstack (Location) Filters</h3>
<p>Allocation callstack information is a list of tuples containing:</p>
<ul>
<li>the name of the file,
</li>
<li>the line in the file,
</li>
<li>a column range.
</li>
</ul>
<p>Currently, the range information is ignored. The line in the file is not, and one can specify a line constraint while writing a callstack filter. The <em>normal</em> syntax is</p>
<pre><code class="language-shell-session"><string-filter>:<line-filter>
</code></pre>
<p>Now, a line filter has two basic shapes</p>
<ul>
<li><code>_</code>: anything,
</li>
<li><code><number></code>: an actual value.
</li>
</ul>
<p>It can also be a range:</p>
<ul>
<li><code>[<basic-line-filter>, <basic-line-filter>]</code>: a potentially open range.
</li>
</ul>
<h4>Line Filter Examples</h4>
<table>
<thead>
<tr>
<th>
</th>
<th>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>`_`</td>
<td>matches any line at all</td>
</tr>
<tr>
<td>`7`</td>
<td>matches line 7</td>
</tr>
<tr>
<td>`[50, 102]`</td>
<td>matches any line between `50` and `102`</td>
</tr>
<tr>
<td>`[50, _]`</td>
<td>matches any line greater than `50`</td>
</tr>
<tr>
<td>`[_, 102]`</td>
<td>matches any line less than `102`</td>
</tr>
<tr>
<td>`[_, _]`</td>
<td>same as `_` (matches any line)</td>
</tr>
</tbody>
</table>
<h4>Callstack Filter Examples</h4>
<p>Whitespaces are inserted for readability but are not needed:</p>
<table>
<thead>
<tr>
<th>
</th>
<th>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>`src/main.ml : _`</td>
<td>matches any line of `src/main.ml`</td>
</tr>
<tr>
<td>`#".*/main.ml"# : 107`</td>
<td>matches line 107 of any `main.ml` file regardless of its path</td>
</tr>
</tbody>
</table>
Rehabilitating Packs using Functors and Recursivity, part 2.https://ocamlpro.com/blog/2020_09_30_rehabilitating_packs_using_functors_and_recursivity_part_22020-09-30T08:12:13Z2020-09-30T08:12:13Z
Pierrick Couderc
This blog post and the previous one about functor packs covers two RFCs currently developed by OCamlPro and Jane Street. We previously introduced functor packs, a new feature adding the possiblity to compile packs as functors, allowing the user to implement functors as multiple source files or even ...<p><img src="/blog/assets/img/train.jpg" alt="" /></p>
<p>This blog post and the previous one about <a href="/blog/2020_09_24_rehabilitating_packs_using_functors_and_recursivity_part_1">functor packs</a> cover two RFCs currently developed by OCamlPro and Jane Street. We previously introduced functor packs, a new feature adding the possibility to compile packs as functors, allowing the user to implement functors as multiple source files or even parameterized libraries.</p>
<p>In this blog post, we will cover the other aspect of the packs rehabilitation: allowing anyone to implement recursive compilation units using packs (as described formally in the <a href="https://github.com/ocaml/RFCs/pull/20">RFC#20</a>). Our previous post introduced briefly how packs were compiled and why we needed some bits of closure conversion to effectively implement big functors. Once again, to implement recursive packs we will need to encode modules through this technique, as such we advise the reader to check at least the introduction and the compilation part of functor packs.</p>
<h2>Recursive modules through recursive packs</h2>
<p>Recursive modules are a feature long available in the compiler, but restricted to modules, not compilation units. As such, it is impossible to write two files that depend on each other, except by using scripts that tie up these modules into a single compilation file. Due to the internal representation of recursive modules, it would be difficult to implement recursive (and mutually recursive) compilation units. However, we could use packs to implement these.</p>
<p>One common example of recursive modules is trees whose nodes are represented by sets. To implement such a data structure with the standard library we need recursive modules: <code>Set</code> is a functor that takes as parameter a module describing the values embedded in the set, but in our case that value type itself refers to the result of applying the functor.</p>
<pre><code class="language-Ocaml">module rec T : sig
type t =
Leaf of int
| Node of TSet.t
val compare : t -> t -> int
end = struct
type t =
Leaf of int
| Node of TSet.t
let compare t1 t2 =
match t1, t2 with
Leaf v1, Leaf v2 -> Int.compare v1 v2
| Node s1, Node s2 -> TSet.compare s1 s2
| Leaf _, Node _ -> -1
| Node _, Leaf _ -> 1
end
and TSet : Set.S with type elt = T.t = Set.Make(T)
</code></pre>
<p>With recursive packs, we can simply put <code>T</code> and <code>TSet</code> into their respective files (<code>t.ml</code> and <code>tSet.ml</code>), and tie them into one module (let's name it <code>P</code>). Signatures of recursive modules cannot be inferred, so we also need to define <code>t.mli</code> and <code>tSet.mli</code>. Both must be compiled simultaneously since they refer to each other. The result of the compilation is the following:</p>
<pre><code class="language-shell-session">ocamlopt -c -for-pack P -recursive t.mli tSet.mli
ocamlopt -c -for-pack P -pack-is-recursive P t.ml
ocamlopt -c -for-pack P -pack-is-recursive P tSet.ml
ocamlopt -o p.cmx -recursive-pack t.cmx tSet.cmx
</code></pre>
<p>We have three new compilation options:</p>
<ul>
<li><code>-recursive</code> indicates to the compiler to typecheck all the given <code>mli</code>s simultaneously, as recursive modules.
</li>
<li><code>-pack-is-recursive</code> indicates which pack(s) in the hierarchy are meant to be recursive. This is necessary since it determines how the module must be compiled (<em>i.e</em> if we will need to apply closure conversion).
</li>
<li><code>-recursive-pack</code> generates a pack that deals with the initialization of its modules, as for recursive modules.
</li>
</ul>
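<p>For the tree/set example above, the two interfaces that would be typechecked simultaneously could look something like the sketch below (this relies on the proposed <code>-recursive</code> behaviour, so today's compiler will not accept these files on their own):</p>
<pre><code class="language-Ocaml">(* t.mli *)
type t =
    Leaf of int
  | Node of TSet.t
val compare : t -> t -> int

(* tSet.mli *)
include Set.S with type elt = T.t
</code></pre>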
<h3>Recursive modules compilation</h3>
<p>One may be wondering why we need packs to compile recursive modules. Let's take a look at how they are encoded. We will craft a naive example that is simple enough once compiled:</p>
<pre><code class="language-Ocaml">module rec Even : sig
val test: int -> bool
end = struct
let test i =
if i-1 <= 0 then false else Odd.test (i-1)
end
and Odd : sig
val test: int -> bool
end = struct
let test i =
if i-1 <= 0 then true else Even.test (i-1)
end
</code></pre>
<p>It defines two modules <code>Even</code> and <code>Odd</code>, that both test whether an integer is even or odd, and if that is not the case calls the test function from the other module. Not a really interesting use of recursive modules obviously. The compilation schema for recursive modules is the following:</p>
<ul>
<li>First, it allocates empty blocks for each module according to its <strong>shape</strong> (how many values are bound and what size they need in the block, if the module is a functor and what are its values, etc).
</li>
<li>Then these blocks are filled with the implementation.
</li>
</ul>
<p>In our case, in a pseudo-code that is a bit higher-level than Lambda (the first intermediate language of OCaml), it would translate as:</p>
<pre><code class="language-Ocaml">module Even = <allocation of the shape of even.cmx>
module Odd = <allocation of the shape of odd.cmx>
Even := <struct let test = .. end>
Odd := <struct let test = .. end>
</code></pre>
<p>This ensures that every reference to <code>Even</code> in <code>Odd</code> (and vice-versa) is a valid pointer. To respect this schema, we will use packs to tie recursive modules together. Without packs, we would have to generate this code when linking the units into an executable, which can be tricky. The pack can simply do it as initialization code.</p>
<h3>Compiling modules for recursive pack</h3>
<p>If we tried to compile these modules naively, we would end up in the same situation as for functor packs: the compilation units would refer to identifiers that do not exist at the time they are generated. Moreover, the initialization part needs to know the shape of the compilation unit to be able to allocate precisely the block that will contain the recursive module. In order to implement recursive compilation units into packs, we extend the compilation units in two ways:</p>
<ul>
<li>The shape of the unit is computed and stored in the <code>cmo</code> (or <code>cmx</code>).
</li>
<li>As for functor packs, we apply closure conversion on the free variables that are modules from the same pack or from packs above in the hierarchy, as long as they are recursive.
</li>
</ul>
<p>As an example, we will reuse our <code>Even</code> / <code>Odd</code> example above and split it into two units <code>even.ml</code> and <code>odd.ml</code>, and compile them into a recursive pack <code>P</code>. Both have the same shape: a module with a single value. <code>Even</code> refers to a free variable <code>Odd</code>, which is in the same recursive pack, and vice-versa. The result of the closure conversion is a function that will take the pointer resulting from the initialization. Since the module is also recursive itself, it takes its own pointer resulting from its initialization. The result will look something like:</p>
<pre><code class="language-Ocaml">(* even.cmx *)
module Even_rec (Even: <even.mli>)(Odd: <odd.mli>) = ..
(* odd.cmx *)
module Odd_rec (Odd: <odd.mli>)(Even: <even.mli>) = ..
(* p.cmx *)
module Even = <allocation of the shape of even.cmx>
module Odd = <allocation of the shape of odd.cmx>
Even := Even_rec(Even)(Odd)
Odd := Odd_rec(Odd)(Even)
</code></pre>
<h2>Rejuvenating packs</h2>
<p>Under the hood, these new features come with some refactoring in the pack implementation, which follows work done for the RFC on the <a href="https://github.com/ocaml/RFCs/pull/13">representation of symbols</a> in the middle-end of the compiler. Packs were not really used anymore, having been deprecated by module aliases; this work makes them relevant again. These RFCs improve the OCaml ecosystem in multiple ways:</p>
<ul>
<li>Compilation units are now on par with modules, since they can be functors.
</li>
<li>Functor packs allow developers to implement parameterized libraries, without having to rely on scripts to produce multiple libraries linked with different <em>backends</em> (for example, Cohttp can use Lwt or Async as backend, and provides two libraries, one for each of these).
</li>
<li>Recursive packs allow the implementation of recursive modules into separate files.
</li>
</ul>
<p>We hope that such improvements will benefit the users and library developers. Having a way to implement parameterized libraries without having to describe big functors by hand, or to use mutually recursive compilation units without scripts that generate a unique <code>ml</code> file, will certainly introduce new workflows.</p>
Rehabilitating Packs using Functors and Recursivity, part 1.https://ocamlpro.com/blog/2020_09_24_rehabilitating_packs_using_functors_and_recursivity_part_12020-09-24T08:12:13Z2020-09-24T08:12:13Z
Pierrick Couderc
OCamlPro has a long history of dedicated efforts to support the development of the OCaml compiler, through sponsorship or direct contributions from Flambda Team. An important one is the Flambda intermediate representation designed for optimizations, and in the future its next iteration Flambda 2. Th...<p><img src="/blog/assets/img/train.jpg" alt="" /></p>
<p>OCamlPro has a long history of dedicated efforts to support the development of the OCaml compiler, through sponsorship or direct contributions from <em>Flambda Team</em>. An important one is the Flambda intermediate representation designed for optimizations, and in the future its next iteration Flambda 2. This work is funded by JaneStreet.</p>
<p>Packs in the OCaml ecosystem are kind of an outdated concept (options <code>-pack</code> and <code>-for-pack</code> in the <a href="https://caml.inria.fr/pub/docs/manual-ocaml/comp.html">OCaml manual</a>), and their main utility has been overtaken by the introduction of <a href="https://caml.inria.fr/pub/docs/manual-ocaml/modulealias.html">module aliases</a> in OCaml 4.02. What if we tried to redeem them and give them new life and utility by adding the possibility to generate functors or recursive packs?</p>
<p>This blog post covers the <a href="https://github.com/ocaml/RFCs/pull/11">functor units and functor packs</a>, while the next one will be centered around <a href="https://github.com/ocaml/RFCs/pull/20">recursive packs</a>. Both RFCs are currently developed by JaneStreet and OCamlPro. This idea was initially introduced by <a href="/blog/2011_08_10_packing_and_functors">functor packs</a> (Fabrice Le Fessant) and later generalized by <a href="https://ocaml.org/meetings/ocaml/2014/ocaml2014_8.pdf">functorized namespaces</a> (Pierrick Couderc et al.).</p>
<h2>Packs for the masses</h2>
<p>First of all let's take a look at what packs are, and how they fixed some issues that arose when the ecosystem started to grow and the number of libraries got quite large.</p>
<p>One common problem in any programming language is how names are treated and disambiguated. For example, look at this small piece of code:</p>
<pre><code class="language-Ocaml">let x = "something"
let x = "something else"
</code></pre>
<p>We declare two variables <code>x</code>, but actually the first one is shadowed by the second, and is now unavailable for the rest of the program. It is perfectly valid in OCaml. Let's try to do the same thing with modules:</p>
<pre><code class="language-Ocaml">module M = struct end
module M = struct end
</code></pre>
<p>The compiler rejects it with the following error:</p>
<pre><code class="language-shell-session">File "m.ml", line 3, characters 0-21:
3 | module M = struct end
^^^^^^^^^^^^^^^^^^^^^
Error: Multiple definition of the module name M.
Names must be unique in a given structure or signature.
</code></pre>
<p>This also applies with programs linking two compilation units of the same name. Imagine you are using two libraries (here <code>lib_a</code> and <code>lib_b</code>), that both define a module named <code>Misc</code>.</p>
<pre><code class="language-shell-session">ocamlopt -o prog.asm -I lib_a -I lib_b lib_a.cmxa lib_b.cmxa prog.ml
File "prog.ml", line 1:
Error: The files lib_a/a.cmi and lib_b/b.cmi make inconsistent assumptions
over interface Misc
</code></pre>
<p>At link time, the compiler will reject your program since you are trying to link two modules with the same name but different implementations. The compiler is unable to differentiate the two compilation units since they define some identical symbols, and as such cannot link the program. Enforcing unique module names in the same namespace (<em>i.e.</em> a signature) is consistent with the inability to link two modules of the same name in the same program.</p>
<p>However, <code>Misc</code> is a common name for a module in any project. How can we avoid that? As a user of the libraries there is nothing you can do, since you cannot rename the modules (you will eventually need to link two files named <code>misc.cmx</code>). As the developer, you need to ensure that your module names are unique enough to be used along any other libraries. One solution would be to use prefixes for each of your compilation units, for example by naming your files <code>mylib_misc.ml</code>, with the drawback that you will need to use those long module names inside your library. Another solution is packing your units.</p>
<p>A pack is simply a generated module that appends all your compilation units into one. For example, suppose you have two files <code>a.ml</code> and <code>b.ml</code>, you can generate a pack (<em>i.e.</em> a module) <code>mylib.cmx</code> that is equivalent to:</p>
<pre><code class="language-Ocaml">module A = struct <content of a.ml> end
module B = struct <content of b.ml> end
</code></pre>
<p>As such, <code>A</code> and <code>B</code> can retain their original module name, and be accessed from the outside as <code>Mylib.A</code> and <code>Mylib.B</code>. It uses the namespacing induced by the module system. A developer can simply generate a pack for its library, assuming its library name will be unique enough to be linked with other modules without the risk of name clashing. However it has one big downside: suppose you use a library with many modules but only use one. Without packs the compiler will only link the necessary compilation units from this library, but since the pack is one big compilation unit this means your program embeds the complete library.</p>
<p>This problem has been fixed since OCaml 4.02 using module aliases and the compiler option <code>-no-alias-deps</code>, and the result for the user is equivalent to a pack, making packs more or less deprecated.</p>
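<p>As a rough sketch of that scheme (file names are illustrative), the library ships prefixed implementation files plus a single aliasing module compiled with <code>-no-alias-deps</code>:</p>
<pre><code class="language-Ocaml">(* mylib_misc.ml : the real implementation, behind a prefixed file name *)
let version = "1.0"

(* mylib.ml : only aliases; compiled with -no-alias-deps, a program that never
   references Mylib.Misc will not link mylib_misc.cmx into the executable *)
module Misc = Mylib_misc
</code></pre>
<p>Users then refer to <code>Mylib.Misc</code>, and only the units they actually use end up linked into their program.</p>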
<h2>Functorizing packs, or how to parameterize a library</h2>
<p>Packs being modules representing libraries, a useful feature would be to be able to produce libraries that take modules as parameters, just like functors. Another usage would be to split a huge functor into multiple files. In other words, we want our pack <code>Mylib</code> to be compiled as:</p>
<pre><code class="language-Ocaml">functor (P : sig .. end) -> struct
module A = struct <content of a.ml> end
module B = struct <content of b.ml> end
end
</code></pre>
<p>while <code>A</code> and <code>B</code> would use the parameter <code>P</code> as a module, and <code>Mylib</code> instantiated later as</p>
<pre><code class="language-Ocaml">module Mylib = Mylib(Some_module_with_sig_compatible_with_P)
</code></pre>
<p>One can notice that our pack is indeed a functor, and not simply a module that binds a functor. To make this possible, we also extend classical compilation units so that they can be compiled as functors. Such functors are not expressed in the language: we do not provide any new syntax; they are purely a matter of compile-time options. For example:</p>
<pre><code class="language-shell-session">ocamlopt -c -parameter P m.ml
</code></pre>
<p>will compile <code>m.ml</code> as a functor with a parameter <code>P</code> whose interface is described by <code>p.cmi</code> in the compilation path. Similarly, our pack <code>Mylib</code> can be produced by the following compilation steps:</p>
<pre><code class="language-shell-session">ocamlopt -c -parameter-of Mylib p.mli
ocamlopt -c -for-pack "Mylib(P)" a.ml
ocamlopt -c -for-pack "MyLib(P)" b.ml
ocamlopt -pack -o mylib.cmx -parameter P a.cmx b.cmx
</code></pre>
<p>In detail:</p>
<ul>
<li>The parameter is compiled with the flag <code>-parameter-of Mylib</code>, so that it won't be used as the interface of an implementation.
</li>
<li>The two packed modules are compiled with the flag <code>-for-pack "Mylib(P)"</code>. Expressing the parameter of the pack is mandatory since <code>P</code> must be known to be a functor parameter (we will see why in the next section).
</li>
<li>The pack is compiled with <code>-parameter P</code>, which will indeed produce a functorized compilation unit.
</li>
</ul>
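<p>To make the steps concrete, here is a minimal sketch of what the sources involved could look like (the contents are hypothetical; only the command lines above come from the actual workflow):</p>
<pre><code class="language-Ocaml">(* p.mli : interface of the parameter, compiled with -parameter-of Mylib *)
val word_size : int

(* a.ml : compiled with -for-pack "Mylib(P)"; P refers to the parameter *)
let buffer_bytes = P.word_size / 8

(* b.ml : compiled the same way; it can use both P and A *)
let description =
  Printf.sprintf "%d-bit target, %d-byte buffers" P.word_size A.buffer_bytes
</code></pre>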
<p>Functors are not limited to a single parameter: a unit can be compiled with multiple <code>-parameter</code> options and multiple arguments in <code>-for-pack</code>. Since this implementation lives on the build system side, it does not need to change the syntax of the language; we expect build tools like dune to provide support for this feature, which may make it even easier to use. Moreover, it puts compilation units on par with modules, which can have a functor type. One downside, however, is that we cannot express type equalities between two parameters, or with the functor body type, as we would do with substitutions in module types.</p>
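<p>For comparison, here is the kind of sharing constraint that ordinary, in-language functors can express (a sketch with hypothetical module names) and that the flag-based encoding cannot, since each parameter is only described by a standalone <code>.cmi</code>:</p>
<pre><code class="language-Ocaml">module type ORD = sig type t val compare : t -> t -> int end

(* In the language itself, the second parameter can be constrained to share
   its type t with the first one... *)
module Merge (P : ORD) (Q : ORD with type t = P.t) = struct
  let max x y = if P.compare x y >= 0 then x else y
  let min x y = if Q.compare x y <= 0 then x else y
end
(* ...whereas with -parameter, P and Q are independent interfaces and no
   "with type t = P.t" can be stated between them. *)
</code></pre>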
<h3>Functor packs under the hood</h3>
<p>In terms of implementation, a pack should be seen as the concatenation of its compilation units followed by a rebinding of each of them in the newly created unit. For example, a pack <code>P</code> of two units <code>m.cmx</code> and <code>n.cmx</code> is actually compiled as something like:</p>
<pre><code class="language-Ocaml">module P__M = <code of m.cmx>
module P__N = <code of n.cmx>
module M = P__M
module N = P__N
</code></pre>
<p>According to this representation, if we tried to naively implement our previous functor pack <code>Mylib(P)</code> we would end up with a functor looking like this:</p>
<pre><code class="language-Ocaml">module Mylib__A = <code of a.cmx, with references to P>
module Mylib__B = <code of b.cmx, with references to P>
functor (P : <signature of p.cmi>) -> struct
module A = Mylib__A
module B = Mylib__B
end
</code></pre>
<p>Unfortunately, this encoding of functor packs is wrong: <code>P</code> is free in <code>a.cmx</code> and <code>b.cmx</code>, and its identifier cannot be made to correspond, after the fact, to the one generated for the functor. The solution is actually quite simple and relies on a transformation known as <strong><a href="https://en.wikipedia.org/wiki/Lambda_lifting">closure conversion</a></strong>. In other words, we transform our modules into functors that take as parameters their free variables, which in our case are the parameters of the functor pack and the dependencies from the same pack. Let's do it on a concrete functor equivalent to <code>Mylib</code>:</p>
<pre><code class="language-Ocaml">module Mylib' (P : P_SIG) = struct
module A = struct .. <references to P> end
module B = struct .. <references to P> <references to A> end
end
</code></pre>
<p>Our goal here is to move <code>A</code> and <code>B</code> outside of the functor, hence out of the scope of <code>P</code>, which is done by transforming those two modules into functors that take a parameter <code>P'</code> with the same signature as <code>P</code>:</p>
<pre><code class="language-Ocaml">module A_funct (P' : P_SIG) = struct .. <references to P as P'> end
module B_funct (P' : P_SIG) = struct
module A' = A_funct(P')
..
<references to P as P'>
<references to A as A'>
end
module Mylib' (P : P_SIG) = struct
module A = A_funct(P)
module B = B_funct(P)
end
</code></pre>
<p>While this code compiles, it is not semantically equivalent: <code>A_funct</code> is instantiated twice, so its side effects are performed twice, first when instantiating <code>A</code> in the functor, and again when instantiating <code>B</code>. The solution is simply to push closure conversion further and make the result of applying <code>A_funct</code> to <code>P</code> an argument of <code>B_funct</code>.</p>
<pre><code class="language-Ocaml">module A_funct (P' : P_SIG) = struct .. <references to P as P'> end
module B_funct (P' : P_SIG)(A': module type of A_funct(P'))= struct
..
<references to P as P'>
<references to A as A'>
end
module Mylib' (P : P_SIG) = struct
module A = A_funct(P)
module B = B_funct(P)(A)
end
</code></pre>
<p>This is exactly how our functor pack <code>Mylib</code> is encoded. Since modules must be compiled in a specific way when they belong to a functor pack, the compiler has to know, through the <code>-for-pack</code> argument, that the pack is a functor and what its parameters are.</p>
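<p>The following self-contained sketch (toy modules, not the compiler's actual output) illustrates why the shared application matters: with the naive encoding the initialisation effect of <code>A_funct</code> runs twice, while the closure-converted encoding runs it only once.</p>
<pre><code class="language-Ocaml">module type P_SIG = sig val name : string end

module A_funct (P' : P_SIG) = struct
  let () = print_endline ("initialising A for " ^ P'.name) (* observable side effect *)
  let greet () = "hello from " ^ P'.name
end

(* Naive encoding: A_funct is applied twice, so the effect runs twice. *)
module Naive (P : P_SIG) = struct
  module A = A_funct (P)
  module B = struct
    module A' = A_funct (P)
    let shout () = String.uppercase_ascii (A'.greet ())
  end
end

(* Closure-converted encoding: the application of A_funct is shared. *)
module B_funct (P' : P_SIG) (A' : module type of A_funct (P')) = struct
  let shout () = String.uppercase_ascii (A'.greet ())
end

module Fixed (P : P_SIG) = struct
  module A = A_funct (P)
  module B = B_funct (P) (A)
end

let () =
  let module N = Naive (struct let name = "naive" end) in  (* prints twice *)
  let module F = Fixed (struct let name = "fixed" end) in  (* prints once *)
  ignore N.B.shout; ignore F.B.shout
</code></pre>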
<h3>Functor packs applied to <code>ocamlopt</code></h3>
<p>What we described is a functional prototype of functor packs, implemented on top of OCaml 4.10, as described in <a href="https://github.com/ocaml/RFCs/pull/11">RFC#11</a>. In practice, we already have one use case that could benefit from it in the future: cross-compilation of native code. At the moment, the compiler is configured to target the architecture on which it is compiled. The modules specific to the current architecture are symbolically linked into the backend folder, and the backend is compiled as if it supported only one architecture. One downside of this approach is that changes to the backend interface that require modifications in each architecture are only detected at compile time for the current one: you need to reconfigure the OCaml compiler and rebuild it to check whether another architecture still compiles. One interesting property is that each architecture backend defines the same set of modules with compatible interfaces. In other words, these modules could simply be parameters of a functor that is instantiated for a given architecture.</p>
<p>Following this idea, we implemented a prototype of the native compiler whose backend is indeed packed into a functor, instantiated at the initialization of the compiler. With this approach, we can easily switch the targeted architecture and, moreover, we can be sure that each architecture still compiles, since the necessary refactoring is forced whenever the backend interface changes. Implementing such a functor is mainly a matter of adapting the build system to produce a functor pack, writing a few signatures for the functor and its parameters, and instantiating the backend at the right time.</p>
<p>This proof of concept shows how functor packs can simplify some complicated build systems and enable new workflows.</p>
<h2>Making packs useful again</h2>
<p>Packs are an old concept that had mostly been superseded by module aliases. They were impractical, being a sort of monolithic library shaped into a single module containing submodules. While they make perfect use of the module system for its namespacing properties, using them forces the compiler to link an entire library even if only one module is actually used. This improvement allows programmers to define big functors, split across multiple files, resulting in what can be seen as a way to implement a form of parameterized library.</p>
<p>In the second part, we will cover another aspect of the rehabilitation of packs: using packs to implement mutually recursive compilation units.</p>
<h1>Comments</h1>
<p>François Bobot (25 September 2020 at 9 h 16 min):</p>
<blockquote>
<p>I believe there is a typo</p>
</blockquote>
<pre><code class="language-ocaml">module Mylib’ (P : P_SIG) = struct
module A = A_funct(P)
module B = A_funct(P)
end
</code></pre>
<blockquote>
<p>The last must be <code>B_funct(P)</code>; the next example also has the same typo.</p>
</blockquote>
<p>Pierrick Couderc (25 September 2020 at 10 h 31 min):</p>
<blockquote>
<p>Indeed, thank you!</p>
</blockquote>
<p>Cyrus Omar (8 February 2021 at 3 h 49 min):</p>
<blockquote>
<p>This looks very useful! Any updates on this work? I’d like to be able to use it from dune.</p>
</blockquote>
A Dune Love story: From Liquidity to Lovehttps://ocamlpro.com/blog/2020_06_09_a_dune_love_story_from_liquidity_to_love2020-06-09T08:12:13Z2020-06-09T08:12:13Z
Steven De Oliveira
By OCamlPro & Origin Labs Writing smart contracts may often be a burdensome task, as you need to learn a new language for each blockchain you target. In the Dune Network team, we are willing to provide as many possibilities as possible for developers to thrive in an accessible and secure framework. T...<div align="center">
<a href="/blog/2020_06_09_a_dune_love_story_from_liquidity_to_love">
<img width="900" height="900" alt="Liquidity & Love" title="A Dune Love story: From Liquidity to Love" src="/blog/assets/img/liq-love-1.png">
</a>
</div>
<p><em>By OCamlPro & Origin Labs</em></p>
<p>Writing smart contracts may often be a burdensome task, as you need to learn a new language for each blockchain you target. In the Dune Network team, we want to provide as many possibilities as possible for developers to thrive in an accessible and secure framework.</p>
<p>There are two kinds of languages on a blockchain: “native” languages that are directly understood by the blockchain, but with some difficulty by the developers, and “compiled” languages that are more transparent to developers, but need to be translated to a native language to run on the blockchain. For example, Solidity is a developer-friendly language, compiled to the native EVM language on the Ethereum blockchain.</p>
<p>Dune Network supports multiple native languages:</p>
<ul>
<li><a href="https://medium.com/dune-network/love-a-new-smart-contract-language-for-the-dune-network-a217ab2255be"><strong>Love</strong></a>, a type-safe language with a ML syntax and suited for formal verification
</li>
<li><a href="https://dune.network/docs/dune-node-mainnet/whitedoc/michelson.html"><strong>Michelson</strong></a>, inherited from <a href="https://tezos.com">Tezos</a>, also type-safe, much more difficult to read
</li>
<li><a href="https://en.wikipedia.org/wiki/Solidity"><strong>Solidity</strong></a>, the Ethereum language, of which we are currently implementing the interpreter after releasing <a href="https://medium.com/dune-network/a-solidity-parser-in-ocaml-with-menhir-e1064f94e76b">its parser in OCaml</a> a few weeks ago
</li>
</ul>
<p>On the side of compiled languages, Dune Network supports:</p>
<ul>
<li><a href="https://www.liquidity-lang.org/"><strong>Liquidity</strong></a>, a type-safe ML language suited for formal verification, that compiles to Michelson (and allows developers to decompile Michelson for auditing)
</li>
<li><a href="https://reasonml.github.io/"><strong>ReasonML</strong></a>, a JavaScript language designed by Facebook that compiles down to Michelson through Liquidity
</li>
<li>All other Tezos languages that compile to Michelson (for example <a href="https://ligolang.org/"><strong>Ligo</strong></a>, <a href="https://smartpy.io/"><strong>SmartPy</strong></a>, <a href="https://albert-lang.io/"><strong>Albert</strong></a>...)
</li>
</ul>
<p>Though Liquidity and Love are both part of the ML family, Liquidity is much more developer-friendly: types are inferred, whereas in Love they have to be explicit, and Liquidity supports the ReasonML JavaScript syntax while Love is bound to its ML syntax.</p>
<p>For all these reasons, we are pleased to announce a wedding: Liquidity now supports the Love language!</p>
<p><img src="/blog/assets/img/liq-love-2.png" alt="Liquidity & Love" /></p>
<p><em>Liquidity now supports generating Love smart contracts</em></p>
<p>This is great news for Love, as Liquidity is easier to use, and comes with an online web editor, <a href="https://www.liquidity-lang.org/edit/">Try-Liquidity</a>. Liquidity is also being targeted by the <a href="https://arxiv.org/pdf/1907.10674.pdf">ConCert project</a>, aiming at <strong>verifying smart contracts</strong> with the formal verification framework Coq.</p>
<p><img src="/blog/assets/img/dune-compilers.png" alt="Dune Languages" /></p>
<p><em>The Smart Contract Framework on the Dune Network</em></p>
<p>Compiling contracts from Liquidity to Love has several benefits compared to Michelson. First, Love contracts are about 60% smaller than Michelson contracts, hence they are <strong>60% cheaper</strong> to deploy. Also, the compiler outputs a Love contract that can be easily read and audited.</p>
<p>The Love compiler is part of the <a href="https://github.com/OCamlPro/liquidity">Liquidity project</a>. It works as follows:</p>
<ul>
<li><strong>The Liquidity contract is type-checked by the Liquidity compiler.</strong> The strong type system of Liquidity enforces structural and semantic properties on the data.
</li>
<li><strong>The typed Liquidity contract is compiled to a typed Love contract.</strong> During this step, the Liquidity contract is scanned to check if it complies with the Love requirements (correct use of operators, no reentrancy, etc.).
</li>
<li><strong>The Love contract is type-checked.</strong> Once this step is completed, the contract is ready to be deployed on the chain!
</li>
</ul>
<p>Want to try it out? Check the <a href="https://www.liquidity-lang.org/edit/">Try-Liquidity</a> website: you can now compile and deploy your Liquidity contracts in Love from the online editor directly to the Mainnet and Testnet using <a href="https://metal.dune.network">Dune Metal</a>!</p>
<hr />
<p>These are some of the resources you might find interesting when building your own smart contracts:</p>
<ul>
<li><strong>The Love Language Documentation</strong>: https://dune.network/docs/dune-dev-docs/love-doc/introduction.html
</li>
<li><strong>Try-Liquidity:</strong> https://www.liquidity-lang.org/edit/
</li>
<li><strong>The Liquidity Website:</strong> https://www.liquidity-lang.org/
</li>
<li><strong>The Dune Network Website:</strong> https://dune.network
</li>
</ul>
<h2>About Origin Labs</h2>
<p>Origin Labs is a company founded in 2019 by the former blockchain team at OCamlPro. At Origin Labs, they have been developing Dune Network, a fork of the Tezos blockchain, its ecosystem, and applications over the Dune Network platform. At OCamlPro, they developed TzScan, the most popular block explorer at the time, Liquidity, a smart contract language, and were involved in the development of the core protocol and node. Feel free to reach out by email: contact@origin-labs.com.</p>
[Interview] Sylvain Conchon joins OCamlPro https://ocamlpro.com/blog/2020_06_06_interview_sylvain_conchon_joins_ocamlpro2020-06-06T08:12:13Z2020-06-06T08:12:13Z
Aurore Dombry
In April 2020, Sylvain Conchon joined the OCamlPro team as our Chief Scientific Officer on Formal Methods. Sylvain is a professor at University Paris-Saclay and has been teaching OCaml in universities for about 20 years. He is the co-author of Apprendre à programmer avec OCaml with Jean-Christ...<p><img src="/blog/assets/img/picture_sylvainconchon.jpg" alt="" /></p>
<p><strong>In April 2020, <a href="https://www.lri.fr/~conchon/">Sylvain Conchon</a> joined the OCamlPro team as our Chief Scientific Officer on Formal Methods</strong>. Sylvain is a professor at University Paris-Saclay and has been teaching OCaml in universities for about 20 years. He is the co-author, with Jean-Christophe Filliâtre, of <em><a href="https://www.eyrolles.com/Informatique/Livre/apprendre-a-programmer-avec-ocaml-9782212136784/">Apprendre à programmer avec OCaml</a></em>, a book for students of the selective French preparatory classes. His field of expertise is automated deduction for program verification and model checking of parameterized systems. He is also the co-creator of <a href="https://alt-ergo.ocamlpro.com">Alt-Ergo</a>, our <a href="https://en.wikipedia.org/wiki/Satisfiability_modulo_theories">SMT solver</a> dedicated to program verification, used by Airbus and qualified for the <a href="https://en.wikipedia.org/wiki/DO-178C">DO-178C</a> avionics standard, as well as of <a href="http://cubicle.lri.fr/">Cubicle</a> and the very useful <a href="https://opam.ocaml.org/packages/ocamlgraph/">OCamlgraph</a> library.</p>
<h4>Research and Industry</h4>
<h4>Sylvain, you’ve been involved in the industrial world for a long time, what do you think about the interactions between industry and research labs?</h4>
<p>I’ve always found interactions with industry professionals very rewarding. During my studies, I worked for several years in an IT services company (SSII), and as a university professor, I have supervised students during their internships or apprenticeships in tech companies or at large industrial companies every year. I also take part in research projects that involve industrial partners, and I spent some time at Intel in Portland, which allowed me to discover the computer hardware industry from the inside.</p>
<h4>How do you establish a fruitful collaboration between academia and industry?</h4>
<p>It’s primarily a question of mutual understanding. You can see it clearly during collaborative research projects that involve both academics and industrial partners. Tools resulting from research, no matter what they are, have to be relevant to real industrial problems. Once that’s taken care of, the software also needs to be usable by industry professionals without them needing to understand its inner workings (for instance they shouldn’t have to specify all 50 necessary options for its use, interpret its results, or its absence of results!).</p>
<p>This requires a significant engineering effort geared towards the end user; and this task is not part of usual research activity. So, we first need to really understand the problems and needs of the industrial partner, and then determine whether our technologies and tools can be adapted or used to prototype a relevant solution.</p>
<h4>You’ve just joined OCamlPro, what are your first thoughts?</h4>
<p>I am very happy to be joining such a dynamic company full of talented, motivated, friendly people, where they do both high-level engineering and top-quality research! Several of my former PhD students are also working at OCamlPro, such as Albin Coquereau, David Declerck and Mattias Roux. With Mohamed Iguernlala and Alain Mebsout at our partner Origin Labs, and with the other OCP team members, it makes our team rock-solid in formal methods tooling development.</p>
<blockquote>
<p><em>“Tools resulting from research, no matter what they are, have to satisfy real industry needs.”</em></p>
</blockquote>
<h4>OCaml, a Cutting-Edge Language</h4>
<h4>You are well known in the OCaml community, and some of your students became fans of OCaml (and of your teaching)… What do you say to your students who are just discovering OCaml?</h4>
<p>I tend to summarize it with one phrase: “With OCaml, you’re not learning the computer programming of the last 10 years, you’re learning the programming of <em>the 10 coming years</em>”. This has proven true numerous times, because a good number of OCaml’s features were to be found in mainstream languages years later. That being said, all my years of teaching this language have led me to think that some modifications to its syntax would make the language easier to tackle for some beginners.</p>
<h4>How did you personally discover OCaml?</h4>
<p>During my master’s thesis <em>(maîtrise)</em> at university, one of my teachers pointed me to this language; they believed it would help me write a compiler for another programming language. So, I discovered OCaml by myself, by reading the manual and going through examples. It wasn’t until my MASt <em>(DEA)</em> that I discovered the theoretical foundations of this fantastic language (semantics, typing, compilation).</p>
<h4>Would you say OCaml is an industrial programming language?</h4>
<p>The question needs to be clarified: what <em>is</em> an industrial programming language? If by industrial language you mean one that is used by industry professionals, then I’d say that OCaml needs to be used more widely to be classified as such. If the question is whether OCaml is at the same level as languages used in industry, then it <em>absolutely</em> is. But maybe the question is more about the OCaml ecosystem and how developed the available tooling is: certain improvements undoubtedly need to be made in order to reach the level of a widespread industrial programming language. But we’re on the right track, especially thanks to companies like OCamlPro and its projects like <a href="https://opam.org">Opam</a> and <a href="https://try.ocamlpro.com">Try-OCaml</a> for example.</p>
<h4>Formal Methods as an Industrial Technique, and the Example of the Alt-Ergo Solver</h4>
<h4>Formal methods being one of OCamlPro’s areas of expertise, in what way do you think OCaml is suited for the SMT domain?</h4>
<p>Tools like SMT solvers are mainly symbolic data manipulation software that allow you to analyze, transform, and reason about logical formulas. OCaml is made for that. There is also a more “computational” side to these tools, which requires precise programming of data structures as well as efficient memory management. OCaml, with its extremely <a href="/blog/2020_03_23_in_depth_look_at_best_fit_gc">efficient garbage collector</a> (GC), is particularly suited for this kind of development. SMT solvers are tools that also need to be very reliable because errors are difficult to find and are potentially very harmful. OCaml’s type system contributes to the reliability of these tools.</p>
<blockquote>
<p>“<em>SMT solvers are nowadays essential in software engineering</em>”</p>
</blockquote>
<h4>Can you describe Alt-Ergo in a few words?</h4>
<p>Alt-Ergo is a piece of software for proving logical formulas automatically (without human intervention), meaning proving whether a formula is true or false. Alt-Ergo belongs to a family of automated provers called SMT (Satisfiability Modulo Theories). It was designed to be integrated into program verification platforms. These platforms (like <a href="https://why3.lri.fr/">Why3</a>, <a href="https://frama-c.com/">Frama-C</a>, <a href="https://www.adacore.com/about-spark">Spark</a>…) generate logical formulas that need to be proven in order to guarantee that a program is safe. Proving these formulas by hand would be very tedious (there are sometimes tens of thousands of formulas to prove). An SMT solver such as Alt-Ergo is there to do that job in a completely automated way. It is what allows these verification platforms to be used at an industrial level.</p>
<h4>In what way does developing this software in OCaml benefit Alt-Ergo over its competitors?</h4>
<p>It makes it more reliable, since an SMT solver, like any program, can have bugs. Most of Alt-Ergo is written in a purely functional programming style, i.e. only using immutable data structures. One of the advantages of this programming style is that it allowed us to formally prove the main components of Alt-Ergo (for example, its kernel was formalized using the Coq proof assistant, which would have been impossible with a language like C++) without sacrificing efficiency thanks to a very good garbage collector and OCaml’s very powerful persistent data structure library. We made use of OCaml’s module system, particularly functors and recursive modules, to design very modular code, making it maintainable and easily extensible. OCaml allowed us to create <a href="/blog/2019_07_09_alt_ergo_participation_to_the_smt_comp_2019">an SMT solver just as efficient as CVC4 or Z3 for program verification</a>, but with a total number of lines of code divided by three or four. This obviously does not guarantee that Alt-Ergo has zero bugs, but it really helps us in fixing any if they are found.</p>
<h4>What is your opinion on SMT solvers and the current state of the art of SMT?</h4>
<p>Today, SMT solvers are essential in software engineering. They can be found in various tools for proving, testing, model checking, abstract interpretation, and typing. The main reason for this success is that they are becoming increasingly efficient and the underlying theories are becoming more and more expressive. It is a very competitive area of research among the world’s best universities and research labs, as well as large IT companies. But there is still a lot of room for improvement, particularly in the nonlinear arithmetic domain, where user demand is growing. For now, one of my research objectives is to combine Model Checking tools with program verification. These two types of tools are based on SMT and should complement each other to offer even more automation to verification tools.</p>
<h4>What applications can SMT techniques and Alt-Ergo have in industry?</h4>
<p>SMT techniques can be used wherever formal methods are useful: for instance (though the list is far from exhaustive), to verify the safety of critical software in embedded systems, to find security vulnerabilities in computer systems, or to solve planning problems. They can also be found in artificial intelligence, where it is crucial to guarantee neural network stability and to produce formal explanations of their results.</p>
<h4>You ended up working on Model Checking, can you tell us about how Model Checking is connected to SMT and how it is currently used?</h4>
<p>Model Checking consists of verifying that all possible states of a system respect certain properties, regardless of the input data. This is a difficult problem because some systems (like microprocessors for example) can have hundreds of millions of states. To reach that scale, model checkers implement extremely sophisticated algorithms to visit these states quickly by storing them in a compact manner. That said, this technique reaches its limits when the input values are unbounded or when the number of system components is unknown. Imagine Internet routing algorithms where you don’t know how many machines are connected. These algorithms must be correct no matter the number of machines. This is where SMT solvers come into play. By using logical formulas, we’re able to represent sets of states of arbitrary sizes. Visiting system states becomes calculating the formulas that represent the states satisfying the desired properties, etc. Therefore, everything in Model Checking is based on logical formulas, and SMT solvers are of course there to reason about these formulas.</p>
[Interview] Sylvain Conchon rejoint OCamlProhttps://ocamlpro.com/blog/2020_06_05_fr_interview_sylvain_conchon_rejoint_ocamlpro2020-06-05T08:12:13Z2020-06-05T08:12:13Z
Aurore Dombry
Sylvain Conchon has just joined OCamlPro as Chief Scientific Officer for Formal Methods. A professor at University Paris-Saclay, he works on automated deduction for program verification and model checking of parameterized systems. He is also ...<p><img src="/blog/assets/img/picture_sylvainconchon.jpg" alt="" /></p>
<blockquote>
<p>Sylvain Conchon has just joined OCamlPro as Chief Scientific Officer for Formal Methods. A professor at University Paris-Saclay, he works on automated deduction for program verification and model checking of parameterized systems. He is also the co-creator of Alt-Ergo.</p>
</blockquote>
<h3>Research and Industry</h3>
<p><strong>Sylvain, you have been involved with the industrial world for a long time; what do you think of the interactions between companies and research labs?</strong></p>
<p>I have always found interactions with industry very rewarding. During my studies, I worked for several years in an IT services company (SSII), and I follow my students during their internships or apprenticeships in tech companies or at large industrial groups. I also take part in research projects that involve industrial partners, and I spent some time at Intel in Portland, which allowed me to discover the hardware industry.</p>
<p><strong>How do you establish fruitful relations between academia and industry?</strong></p>
<p>It is very much a matter of people meeting. You can see it when collaborative research projects bringing together academics and industrial partners are set up. Tools resulting from research, whatever they are, must above all answer a real need of industrial users. If that is the case, the software must also be usable by engineers in the field without them needing to understand its inner workings (for instance, to set the 50 options required for its use, to interpret its results, or its absence of results!). This obviously requires a significant engineering effort, geared towards the end user and often far removed from the researchers' activities. You therefore need to understand the problems and needs of the industrial partners, and then determine whether the technologies and tools you master can be adapted or used to build a prototype that answers some of these needs.</p>
<p><strong>You have just joined OCamlPro, what are your first impressions?</strong></p>
<p>I am happy to have joined a very dynamic company full of talented, motivated and friendly people, where both high-level engineering and top-quality research are done!</p>
<blockquote>
<p><em>“Tools resulting from research, whatever they are, must above all answer a real industrial need.”</em></p>
</blockquote>
<h3>OCaml, a Cutting-Edge Language</h3>
<p><strong>You are well known in the OCaml community, and some of your students have become OCaml fans (and fans of your teaching)… what do you tell your students who are discovering OCaml?</strong></p>
<p>I tend to sum it up like this: <em>“with OCaml, you are not learning the programming of the last 10 years, but that of the next 10 years”</em>. This claim has always proven true, since many of OCaml's features have found their way into mainstream languages several years later. That being said, my years of experience teaching this language lead me to think that a few changes to its syntax would make it easier for some beginners to approach.</p>
<p><strong>And you, how did you discover OCaml?</strong></p>
<p>At university, during my final <em>maîtrise</em> project: one of my teachers pointed me to this language to help me write a compiler for a concurrent programming language. So I discovered the language on my own, by reading the manual and the examples. It was only during my DEA that I discovered the theoretical foundations of this beautiful language (semantics, typing, compilation).</p>
<p><strong>OCaml, an industrial language or not yet?</strong></p>
<p>The question needs to be made precise: what is an industrial language? If it means a language used by industry, then OCaml is unfortunately not yet used widely enough in industry to qualify. If the question is whether it is on par with the languages used in industry, then the answer is yes, without hesitation. But perhaps the question is really about the OCaml ecosystem and the maturity of its tooling: there is certainly progress to be made to reach the level of a very widespread industrial language, but it is well on its way, in particular thanks to companies such as OCamlPro.</p>
<h3>Formal Methods as an Industrial Technique, and the Example of the Alt-Ergo Solver</h3>
<p><strong>Formal methods are one of OCamlPro's areas of expertise; in what way do you think OCaml is suited to the SMT domain?</strong></p>
<p>Tools such as SMT solvers are mainly symbolic data manipulation software that analyse, transform and reason about logical formulas. OCaml is made for this kind of processing. There is also a more “computational” side to these tools, which requires careful programming of the data structures as well as efficient memory management. OCaml is particularly well suited to this kind of development, especially with its extremely efficient garbage collector (GC). Finally, SMT solvers are tools that must be highly reliable, because errors in such software are hard to find and their presence can be very harmful. OCaml's type system contributes to the reliability of these tools.</p>
<blockquote>
<p><em>“SMT solvers are nowadays essential in software engineering.”</em></p>
</blockquote>
<p><strong>Can you tell us about Alt-Ergo in a few words?</strong></p>
<p>It is a piece of software used to prove logical formulas automatically (without human intervention), that is, to determine whether these formulas are true or false. Alt-Ergo belongs to a family of automated provers called SMT (for Satisfiability Modulo Theories). It was designed to be integrated into program verification platforms. These tools (such as Why3, Frama-C, Spark, …) generate logical formulas that must be proven in order to guarantee that a program is safe. Proving these formulas by hand would be very tedious (there are sometimes tens of thousands of formulas to prove). An SMT solver like Alt-Ergo is there to do this work in a completely automatic way. This is what makes these verification platforms usable at an industrial level.</p>
<p><strong>In what way can developing Alt-Ergo in OCaml be an advantage over its competitors?</strong></p>
<p>It gives it greater reliability, because an SMT solver, like any program, can also have bugs. Most of Alt-Ergo is written in a purely functional style, that is, using only immutable data structures. One of the advantages of this programming style is that it allowed us to formally prove its main components (for example, its kernel was formalized with the Coq proof assistant, which would be impossible to do in a language like C++), without sacrificing efficiency, thanks to OCaml's very good garbage collector and its very efficient persistent data structure library. Finally, we benefited greatly from OCaml's module system, in particular functors and recursive modules, to design a very modular code base that is maintainable and easily extensible. In the end, OCaml allowed us to build an SMT solver as efficient as CVC4 or Z3 for program verification, but with three to four times fewer lines of code. Of course, this does not guarantee that Alt-Ergo has zero bugs, but it helps us a lot in pinpointing them when someone finds one.</p>
<p><em>“OCaml allowed us to build an SMT solver as efficient as CVC4 or Z3 for program verification, but with three to four times fewer lines of code.”</em></p>
<p><strong>What is your opinion on SMT solvers and the current state of the art in SMT?</strong></p>
<p>SMT solvers are nowadays essential in software engineering. They can be found in proof, testing, model checking, abstract interpretation and even typing tools. The main reason for this success is that they are more and more efficient and the underlying theories are very expressive. It is a highly competitive research area involving the world's best universities and labs as well as large IT companies. But there is still a lot of room for improvement, in particular in nonlinear arithmetic, where user demand is growing ever stronger. For now, one of my research goals is to combine Model Checking tools with program verification tools. These two families of tools rely on SMT, and they should complement each other to offer even more automated verification tools.</p>
<p><strong>What applications can SMT techniques and Alt-Ergo have in industry?</strong></p>
<p>SMT techniques can be used wherever formal methods can be useful. For example (and this list is far from exhaustive), to verify the safety of critical software in embedded systems, to find security flaws in computer systems, or to solve planning problems. They are also found in artificial intelligence, where it is crucial to guarantee the stability of neural networks and to produce formal explanations of their results.</p>
<p><strong>You have come to work on Model Checking; can you tell us about the links between Model Checking and SMT, and about its current use?</strong></p>
<p>Model Checking consists in verifying that all the possible states of a system satisfy certain properties, whatever the input data. This is a hard problem, because some systems (microprocessors, for instance) can have hundreds of millions of states. To scale up, model checkers implement very sophisticated algorithms that visit these states quickly while storing them in a very compact way. However, this technique reaches its limits when the input values are unbounded or when the number of components of the system is unknown. Think of Internet routing algorithms, where the number of machines on the network is unknown: these algorithms must be correct whatever that number is. This is where SMT solvers come into play. Using logical formulas, one can represent sets of states of arbitrary size. Visiting the states of a system then amounts to computing the formulas that represent these states; checking that the states satisfy a property amounts to proving that the formulas representing these states imply the desired property, and so on. Everything in Model Checking therefore rests on logical formulas, and SMT solvers are of course there to reason about these formulas.</p>
Tutoriel Formathttps://ocamlpro.com/blog/2020_06_01_fr_tutoriel_format2020-06-01T08:12:13Z2020-06-01T08:12:13Z
OCamlPro
Article written by Mattias. OCaml's Format module is an extremely powerful module but unfortunately very poorly used. It notably combines two distinct elements: pretty-printing boxes
semantic tags This article aims to demystify a large part of ...<p><em>Article written by Mattias.</em></p>
<p>The <a href="http://caml.inria.fr/pub/docs/manual-ocaml/libref/Format.html">Format</a> module of OCaml is an extremely powerful module, but unfortunately a very poorly used one. It notably combines two distinct elements:</p>
<ul>
<li>pretty-printing boxes
</li>
<li>semantic tags
</li>
</ul>
<p>This article aims to demystify a large part of this module, so as to discover everything that can be done with it.</p>
<p>If all goes well, you should go from</p>
<p><img src="/blog/assets/img/error1-output.png" alt="trivial output" /></p>
<p>to</p>
<p><img src="/blog/assets/img/ocaml-output.png" alt="OCaml output" /></p>
<p>(In reality we will end up with a slightly different result, because the author of this tutorial does not like all the choices made for displaying error messages in OCaml, but the differences will not matter much.)</p>
<h2>I. General introduction: <code>fprintf fmt "%a" pp_error e</code></h2>
<p>If you do not understand what the code in the title is supposed to do, I invite you
to read what follows carefully. Otherwise you can skip
straight to the second part.</p>
<h3>I.1. A reminder about <code>printf</code></h3>
<p>As a reminder, the <code>printf</code> function is a variadic function (that is, it can take a variable number of parameters).</p>
<ul>
<li>
<p>The first parameter is a format string made of characters and format specifiers.</p>
<ul>
<li><strong>Characters</strong> are printed as they are. <code>printf "abc"</code> prints <code>abc</code>.
</li>
<li><strong>Conversion specifiers</strong> are characters preceded by the <code>% </code>character (a syntax inherited from C). At run time they are replaced by one of the parameters given to the function after the format string, and they indicate the type of the value to be printed (as well as other information, whose details can be found in the documentation of the <a href="https://caml.inria.fr/pub/docs/manual-ocaml/libref/Printf.html">Printf</a> module). <code>printf "Test: %d"</code> expects a signed integer and prints <code>Test: <d></code>, where <code><d></code> is replaced by the given integer.
</li>
</ul>
</li>
<li>
<p>The next parameters are the values given to <code>printf</code> to replace the format specifiers.</p>
<ul>
<li><code>printf "%d %s %c" 3 s 'a'</code> prints the signed integer 3, a non-breaking space, the content of the variable <code>s</code> (which must be a string), another non-breaking space and finally the character 'a'.
</li>
<li>Note also that the number of extra parameters given after the format string matches the number of specifiers, and that they cannot be swapped. <code>printf "%d %c" 'a' 3</code> will not compile or run, because <code>%d</code> expects a signed integer and the first parameter is a character. Specifiers that expect a single argument are what I call <strong>unary</strong> specifiers; they are extremely easy to use, you only need to know which character corresponds to which type and to give the arguments in the right order, as
illustrated in the figure below (the chevron representing standard output).
</li>
</ul>
</li>
</ul>
<p><img src="/blog/assets/img/printf-base-out-dark.png" alt="Basic behaviour of printf" /></p>
<h3>I.2. Printing a user-defined type</h3>
<p>Then comes the moment when you start defining your own data structures and, unfortunately, there is no way to print your own expressions with the default specifiers (which seems normal). So let us define our own type and print it with the techniques seen so far:</p>
<pre><code class="language-OCaml">type error =
| Type_Error of string * string
| Apply_Non_Function of string
let pp_error = function
| Type_Error (s1, s2) -> printf "Type is %s instead of %s" s1 s2
| Apply_Non_Function s -> printf "Type is %s, this is not a function" s
</code></pre>
<p>Now suppose we have a list of errors and we want to print them, separating them with a line break. A first solution would be the following:</p>
<pre><code class="language-OCaml">let pp_list l =
List.iter (fun e ->
pp_error e;
printf "\n"
) l
</code></pre>
<p>This way of doing things has several drawbacks (which will be magically solved by the function in the title).</p>
<h3>I.3. Printing to an abstract <code>formatter</code></h3>
<p>The first drawback is that <code>printf</code> sends its output to standard output, whereas we may want to send it to a file or to standard error, for example.</p>
<p>The solution is <code>fprintf</code> (it would be good form to feign surprise here).</p>
<p><code>fprintf</code> takes an extra parameter before the format string, called an <strong>abstract <code>formatter</code></strong>. This parameter has type <code>formatter</code> and represents a pretty-printer, that is, the object to which the output will be sent. The huge advantage that follows is that many things can be turned into a <code>formatter</code>: a file, a buffer, standard output, etc. As a matter of fact, <code>printf</code> is implemented as <code>let printf = fprintf std_formatter</code>.</p>
<p>To use it, we therefore modify <code>pp_error</code> and give it an extra parameter:</p>
<pre><code class="language-OCaml">let pp_error fmt = function
| Type_Error (s1, s2) -> fprintf fmt "Type is %s instead of %s" s1 s2
| Apply_Non_Function s -> fprintf fmt "Type is %s, this is not a function" s
</code></pre>
<p>Then we rewrite <code>pp_list</code> to take this into account:</p>
<pre><code class="language-OCaml">let pp_list fmt l =
List.iter (fun e ->
pp_error fmt e;
fprintf fmt "\n"
) l
</code></pre>
<p>As can be seen in the figure below, <code>fprintf</code> prints to the <code>formatter</code> it is given as a parameter, and no longer to standard output.</p>
<p><img src="/blog/assets/img/fprintf-base-out-dark.png" alt="Basic behaviour of fprintf" /></p>
<p>If we now want to print the result to standard output, we simply pass <code>std_formatter</code> to <code>pp_list</code> as the <code>formatter</code> used by <code>fprintf</code>. This way of doing things really has only advantages, since it makes it possible to be much more flexible about the <code>formatter</code> that will be used when the program runs.</p>
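<p>For instance, a <code>formatter</code> for a file or a buffer can be obtained with <code>Format.formatter_of_out_channel</code> and <code>Format.formatter_of_buffer</code>; here is a small sketch reusing our <code>pp_list</code> (file name and values chosen arbitrarily):</p>
<pre><code class="language-OCaml">let () =
  (* A formatter writing to a file *)
  let oc = open_out "errors.log" in
  let file_fmt = Format.formatter_of_out_channel oc in
  pp_list file_fmt [Type_Error ("int", "bool")];
  Format.pp_print_flush file_fmt ();   (* flush before closing the channel *)
  close_out oc;
  (* A formatter writing to a buffer *)
  let buf = Buffer.create 64 in
  let buf_fmt = Format.formatter_of_buffer buf in
  pp_list buf_fmt [Apply_Non_Function "int"];
  Format.pp_print_flush buf_fmt ();
  print_string (Buffer.contents buf)
</code></pre>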
<h3>I.4. Printing complex types with <code>%a</code></h3>
<p>The second problem will show up soon enough if we carry on with this method. To understand it well, let us go back to <code>pp_error</code>. In the <code>Type_Error of string * string</code> case we want to print <code>Type is s1 instead of s2</code>, so we give <code>fprintf</code> the format string <code>"Type is %s instead of %s"</code> with <code>s1</code> and <code>s2</code> as extra parameters. What should we do if <code>s1</code> and <code>s2</code> were user-defined types, each with its own printing function <code>pp_s1 : formatter -> s1 -> unit</code> and <code>pp_s2 : formatter -> s2 -> unit</code>? Following the logic of our solution so far, we would write the following code:</p>
<pre><code class="language-OCaml">let pp_error fmt = function
| Type_Error (s1, s2) ->
fprintf fmt "Type is ";
pp_s1 fmt s1;
fprintf fmt "instead of ";
pp_s2 fmt s2
| Apply_non_function s -> fprintf fmt "Type is %s, this is not a function" s
</code></pre>
<p>It is fairly easy to see that the more complex the types we have to handle, the heavier this syntax becomes. All of this because unary conversion specifiers can only handle OCaml's basic types.</p>
<p>This is where <code>%a</code> comes into play. This specifier is binary (ternary in reality, but one of its parameters is already provided). Its parameters are:</p>
<ul>
<li>A printing function of type <code>formatter -> 'a -> unit</code> (the first parameter, which must be provided)
</li>
<li>The <code>formatter</code> it must print to (which must not be provided again)
</li>
<li>The value we want to print
</li>
</ul>
<p>It then applies the function given as first argument to the <code>formatter</code> and the value, and hands control over to it so that it prints what it has to in the <code>formatter</code> it received as a parameter. When that function is done, printing resumes. The following example shows how it works (printing to standard output, <code>fmt</code> having been replaced by <code>std_formatter</code>).</p>
<p><img src="/blog/assets/img/fprintfpa-base-out-dark.png" alt="" /></p>
<p>In our case we had already changed our printing functions so that they take an abstract <code>formatter</code>, so there is almost nothing to modify:</p>
<pre><code class="language-OCaml">let pp_error fmt = function
| Type_Error (s1, s2) -> fprintf fmt "Type is %s instead of %s" s1 s2
| Apply_Non_Function s -> fprintf fmt "Type is %s, this is not a function" s
let pp_list fmt l =
List.iter (fun e ->
fprintf fmt "%a\n" pp_error e;
) l
</code></pre>
<p>And, of course, if <code>s1</code> and <code>s2</code> had their own printing functions:</p>
<pre><code class="language-ocaml">let pp_error fmt = function
| Type_Error (s1, s2) -> fprintf fmt "Type is %a instead of %a" pp_s1 s1 pp_s2 s2
| Apply_Non_Function s -> fprintf fmt "Type is %s, this is not a function" s
</code></pre>
<p>At this point you should be comfortable with the notions of abstract <code>formatter</code> and binary specifier, and you should therefore be able to print any data structure, even a recursive one, without any trouble. I strongly recommend this way of doing things, so that any later change does not require modifying the entire code.</p>
<h2>II. Pretty-printing boxes</h2>
<p>And precisely in order to make changes that do not require modifying everything, we need to take at least some interest in pretty-printing boxes.</p>
<p>Also called <em>pretty-print boxes</em> (I will simply call them “boxes” from now on); a <a href="https://ocaml.org/learn/tutorials/format.fr.html">tutorial</a> written by the standard library team already exists.</p>
<p>The idea behind boxes is quite simple:</p>
<blockquote>
<p>At my level, I take proper care of how to print my own elements, and I impose nothing on the levels above.</p>
</blockquote>
<p>Let us go back, for example, to the function that prints <code>error</code> values:</p>
<pre><code class="language-ocaml">let pp_error fmt = function
| Type_Error (s1, s2) -> fprintf fmt "Type is %s instead of %s" s1 s2
| Apply_Non_Function s -> fprintf fmt "Type is %s, this is not a function" s
</code></pre>
<p>If we added a line break, we would impose that line break on every function calling us, and that is not our decision to make. This function, as it stands, does exactly what it has to do.</p>
<p>Let us look, on the other hand, at the function that prints a list of errors:</p>
<pre><code class="language-ocaml">let pp_list fmt l =
List.iter (fun e ->
fprintf fmt "%a\n" pp_error e;
) l
</code></pre>
<p>At the end of it, a line break coming from the last element is forced. Moreover, it is not recommended to use <code>\n</code> (or <code>@\n</code>, or even <code>@.</code>), because strictly speaking these are not <code>Format</code> directives but system directives, which will disrupt the rest of the printing.</p>
<blockquote>
<p>Unfortunately, far too many developers discovered <code>@.</code> at the same time as <code>Format</code> and use it without restraint. At the risk of repeating myself often: do not use <code>@.</code>!</p>
</blockquote>
<h3>II.1. The <code>@</code> specifier</h3>
<p>As we have seen, a format string is made of characters and of specifiers starting with <code>%</code>. Specifiers are characters that are not printed as they are, and that are replaced before the final output.</p>
<p><code>Format</code> adds its own specifier: <code>@</code>.</p>
<h4>II.1.a. Flushing</h4>
<p>The first specification we saw is thus the one that should almost never be used (which raises the question of why it was mentioned first): <code>@.</code>. This specification only tells the printing engine that, at this point, it must break the line and flush the printer. The two other similar specifications are <code>@\n</code>, which only breaks the line, and <code>@?</code>, which only flushes the printer. The drawback of these three specifiers is that they are too powerful and therefore upset the proper behaviour of the rest of the printing. Personally, I have never used <code>@\n</code> (you might as well use a box with a break hint, as we will see right after) and I only use <code>@.</code> when I know there is nothing left to print.</p>
<h4>II.1.b. Break and space hints</h4>
<p>Important:</p>
<ul>
<li>A break hint breaks the line if needed; otherwise it does nothing
</li>
<li>A breakable-space hint breaks the line if needed; otherwise it prints a space
</li>
</ul>
<p>Both are therefore hints for a line break when necessary; there is no
hint meaning “a space by default, or nothing if there is not
enough room” (using <code> </code> will always print a space).</p>
<p>There are three such hints (and their behaviour will be much clearer once you have seen boxes):</p>
<ul>
<li><code>@,</code> : a break hint (that is, nothing by default, or a line break if needed)
</li>
<li><code>@⎵</code> : a breakable-space hint (that is, a space by default,
or a line break if needed) (the <code>⎵</code> character should of course be read
as the usual whitespace character)
</li>
<li><code>@;<n o></code> : <code>n</code> breakable spaces or a break indented by <code>o</code> (that is, <code>n</code> breakable spaces by default, or a line break <strong>with an extra indentation of <code>o</code></strong> if needed)
</li>
</ul>
<p>From what has just been written, it should now be obvious that the plain space character is a non-breaking space, which will therefore never cause a line break even when the line overflows. Unlike the spaces of our usual word processors, which are breakable (and may thus cause line breaks), with <code>Format</code> you must specify which spaces are breakable.</p>
<p>For example, we write <code>fprintf fmt "let rec f =@ %a" pp_expr e</code> because we do not want <code>let rec f =</code> to be split over several lines, but we do put <code>@⎵</code> before <code>%a</code> because the expression will either stay on the same line if it is small enough, or move to the next line if it is too large (we should even write <code>@;<1 2></code> so that the expression is indented if we move to the next line but, as we are about to see, this is where boxes let us automate this kind of behaviour).</p>
<h4>II.1.c. Boxes</h4>
<p>The second specification is the one that opens and closes boxes.</p>
<p>A box starts with <code>@[</code> and ends with <code>@]</code>. Between these two markers, you can do whatever you want (<strong>except use <code>@.</code>, <code>@?</code> or <code>@\n</code>!</strong>). Everything that happens inside the box stays (and must stay) inside it. Indentation, breaks, vertical boxes, horizontal boxes, both, one or the other: all these options become available once a box has been opened. Let us go over them quickly (as a reminder, the detailed version is available in the <a href="https://ocaml.org/learn/tutorials/format.fr.html">tutorial</a>).</p>
<p>Once a box has been opened, you can specify between angle brackets the behaviour you want it to have when it meets break hints; here is a quick overview:</p>
<ul>
<li><code><v></code> : every break hint causes a line break
</li>
<li><code><h></code> : every space hint prints a space; break hints have no effect
</li>
<li><code><hv></code> : if the whole box fits on one line, only space hints are taken into account; otherwise only break hints are, and every element is printed on its own line
</li>
<li><code><hov></code> : as long as elements fit on the current line they are printed there with their space hints; break hints are used when a line break is needed
</li>
</ul>
<p>Each of these behaviours can be given an extra value, its indentation value, which indicates the indentation, relative to the beginning of the box, to be added at every line break.</p>
<p>Consider the following code, which prints a list of items separated either by a break hint <code>@,</code>, by a space hint <code>@⎵</code>, or by a spaces-or-indented-break hint <code>@;<2 3></code> (2 spaces, or a break indented by three spaces):</p>
<pre><code class="language-ocaml">open Format
let l = ["toto"; "tata"; "titi"]
let pp_item fmt s = fprintf fmt "%s" s
let pp_cut fmt () = fprintf fmt "@,"
let pp_spc fmt () = fprintf fmt "@ "
let pp_brk fmt () = fprintf fmt "@;<2 3>"
let pp_list pp_sep fmt l =
pp_print_list pp_item ~pp_sep fmt l
</code></pre>
<p>Here is a summary of how the different kinds of boxes behave depending on the break/space hints they meet:</p>
<pre><code class="language-ocaml">(* Vertical box (every hint is a break) *)
printf "------------@.";
printf "v@.";
printf "------------@.";
printf "@[<v 2>[%a]@]@." (pp_list pp_cut) l;
printf "@[<v 2>[%a]@]@." (pp_list pp_spc) l;
printf "@[<v 2>[%a]@]@." (pp_list pp_brk) l;
(* Expected output:
------------
v
------------
[toto
tata
titi]
[toto
tata
titi]
[toto
tata
titi]
*)
(* Horizontal box (no breaks) *)
printf "------------@.";
printf "h@.";
printf "------------@.";
printf "@[<h 2>[%a]@]@." (pp_list pp_cut) l;
printf "@[<h 2>[%a]@]@." (pp_list pp_spc) l;
printf "@[<h 2>[%a]@]@." (pp_list pp_brk) l;
(* Expected output:
------------
h
------------
[tototatatiti]
[toto tata titi]
[toto tata titi]
*)
(* Horizontal-vertical box
(prints everything on one line if possible, otherwise behaves like a vertical box) *)
printf "------------@.";
printf "hv@.";
printf "------------@.";
printf "@[<hv 2>[%a]@]@." (pp_list pp_cut) l;
printf "@[<hv 2>[%a]@]@." (pp_list pp_spc) l;
printf "@[<hv 2>[%a]@]@." (pp_list pp_brk) l;
(* Expected output:
------------
hv
------------
[toto
tata
titi]
[toto
tata
titi]
[toto
tata
titi]
*)
(* Packing horizontal-or-vertical box
(prints as much as possible on each line before breaking to the
next one and starting again) *)
printf "------------@.";
printf "hov@.";
printf "------------@.";
printf "@[<hov 2>[%a]@]@." (pp_list pp_cut) l;
printf "@[<hov 2>[%a]@]@." (pp_list pp_spc) l;
printf "@[<hov 2>[%a]@]@." (pp_list pp_brk) l;
(* Expected output:
------------
hov
------------
[tototata
titi]
[toto tata
titi]
[toto
tata
titi]
*)
(* Structural horizontal-or-vertical box
(same behaviour as the packing box, except that the last line
break tries to favour an indentation level of 0) *)
printf "------------@.";
printf "b@.";
printf "------------@.";
printf "@[<b 2>[%a]@]@." (pp_list pp_cut) l;
printf "@[<b 2>[%a]@]@." (pp_list pp_spc) l;
printf "@[<b 2>[%a]@]@." (pp_list pp_brk) l
(* Expected output:
------------
b
------------
[tototata
titi]
[toto tata
titi]
[toto
tata
titi]
*)
</code></pre>
<p>A quick note on the use of <code>@.</code> here, even though the recommendation is never to use it. The rule is not really <strong>never</strong>: you should simply only use it when you are sure you are not inside any box. Here, for instance, we want to clearly separate the output of the different boxes, so it is perfectly fine to use <code>@.</code>, since we know we are at the outermost printing level (nothing above us) and cannot break an enclosing pretty-printing run. It would therefore be more accurate to say</p>
<blockquote>
<p>Do not use <code>@.</code>, <code>@\n</code> or <code>@?</code> in printing code that is, or may end up, nested inside other printing code</p>
</blockquote>
<p>But it is much simpler, when starting out, never to use them and only add them back later where appropriate.</p>
<p>The behaviour of the <code>b</code> box (structural box) may look identical to that of the <code>hov</code> box (packing box), but there are cases where the two differ (generally, when a line break decreases the current indentation, the structural box breaks the line even if there is still room on the current one). See the <a href="https://ocaml.org/learn/tutorials/format.fr.html">tutorial</a> for more details. (I must also admit that their behaviour is close to what one might call “opaque”, since depending on the margin size the expected behaviour may or may not show up. The author of this tutorial would rather use vertical boxes with zero indentation whenever he wants the behaviour of structural boxes; an example is given in the HTML output at the end of this document.)</p>
<h3>II.2. Summary</h3>
<ul>
<li>Use boxes
</li>
<li>Since flush directives close every box, you must never use them in internal printing functions; stick to break and space hints
</li>
<li>Really, do use boxes
</li>
</ul>
<p>You are now equipped to use Format in its simplest form: boxes, indentation, and break and space hints.</p>
<p>Let us go back to our error display:</p>
<pre><code class="language-ocaml">let pp_error fmt = function
| Type_Error (s1, s2) -> fprintf fmt "@[<hov 2>Type is %s@ instead of %s@]" s1 s2
| Apply_non_function s -> fprintf fmt "@[<hov 2>Type is %s,@ this is not a function@]" s
let pp_list fmt l =
pp_print_list pp_error fmt l
</code></pre>
<p>We wrapped the display of both errors in <code>hov</code> boxes with a breakable space hint in the middle, and used the <code>pp_print_list</code> function from the <a href="https://caml.inria.fr/pub/docs/manual-ocaml/libref/Format.html">Format</a> module.</p>
<p>If I now try to display a list of errors in two environments, one 50 columns wide and the other 25 columns wide, with the following code:</p>
<pre><code class="language-ocaml">let () =
let e1 = Type_Error ("int", "bool") in
let e2 = Apply_non_function ("int") in
let e3 = Type_Error ("int", "float") in
let e4 = Apply_non_function ("bool") in
let el = [e1; e2; e3; e4] in
pp_set_margin std_formatter 50;
fprintf std_formatter "--------------------------------------------------@.";
fprintf std_formatter "@[<v 0>%a@]@." pp_list el;
pp_set_margin std_formatter 25;
fprintf std_formatter "-------------------------@.";
  fprintf std_formatter "@[<v 0>%a@]@." pp_list el
</code></pre>
<p>I get the following result:</p>
<pre><code class="language-ocaml">--------------------------------------------------
Type is int instead of bool
Type is int, this is not a function
Type is int instead of float
Type is bool, this is not a function
-------------------------
Type is int
instead of bool
Type is int,
this is not a function
Type is int
instead of float
Type is bool,
this is not a function
</code></pre>
<p>What we add in verbosity, we gain in elegance. And speaking of elegance, this could use some colour.</p>
<h2>III. Semantic tags</h2>
<p>This part is not covered in the tutorial, but it is explained fairly quickly in a <a href="https://hal.archives-ouvertes.fr/hal-01503081/file/format-unraveled.pdf">tutorial article</a>.</p>
<p>The third kind of specification, then (after breaks and boxes), is the semantic tag specification: <code>@{</code> to open one and <code>@}</code> to close it.</p>
<h3>III.1. Marking up your text</h3>
<p>But before looking at how they work, let us see why they are useful. Whether you are printing to a terminal, to an HTML page or to something else, chances are that the output supports text markup such as italics, colours, and so on. As an Emacs user working in an <a href="https://en.wikipedia.org/wiki/ANSI_escape_code">ANSI terminal</a>, I can change the appearance of my text with ANSI codes:</p>
<p><img src="/blog/assets/img/exemple-ansiterm.png" alt="Exemple de marquage de texte dans un terminal ANSI" /></p>
<p>If I write an OCaml program that prints this string and run it directly in my terminal, I should get the same result:</p>
<p><img src="/blog/assets/img/exemple-ansiocaml-bad.png" alt="Exemple de marquage de texte dans un terminal ANSI depuis un programme OCaml qui se passe mal" /></p>
<p>Naturally, it does not work; if computing were standardised and everyone knew how to talk to everyone else, we would know about it. It turns out that <code>\033</code> is interpreted as octal by ANSI terminals but as decimal by OCaml (which seems to be the usual interpretation). OCaml lets you write a <a href="https://caml.inria.fr/pub/docs/manual-ocaml/lex.html#sss:character-literals">character</a> with several different escape sequences:</p>
<table><thead><tr><th>Sequence</th><th>Resulting character</th></tr></thead><tbody><tr><td><code>\DDD</code></td><td>the character with ASCII code <code>DDD</code> in decimal</td></tr><tr><td><code>\xHH</code></td><td>the character with ASCII code <code>HH</code> in hexadecimal</td></tr><tr><td><code>\oOOO</code></td><td>the character with ASCII code <code>OOO</code> in octal</td></tr></tbody></table>
<p>So we can write, equivalently:</p>
<pre><code class="language-ocaml">let () = Format.printf "\027[36mBlue Text \027[0;3;30;47mItalic WhiteBG Black Text"
let () = Format.printf "\x1B[36mBlue Text \x1B[0;3;30;47mItalic WhiteBG Black Text"
let () = Format.printf "\o033[36mBlue Text \o033[0;3;30;47mItalic WhiteBG Black Text"
</code></pre>
<p>In every case, we get the following result:</p>
<p><img src="/blog/assets/img/exemple-ansiocaml-good.png" alt="Exemple de marquage de texte dans un terminal ANSI depuis un programme OCaml qui se passe bien" /></p>
<p>What happens, though, if I run one of these lines in a non-ANSI terminal? Testing it on <a href="https://try.ocamlpro.com/">TryOCaml</a>:</p>
<p><img src="/blog/assets/img/tryocaml-ansi.png" alt="Exemple de marquage de texte dans un navigateur depuis TryOCaml" /></p>
<p>We really do not want this kind of output to happen. We therefore need to make sure the text markup is only active when we decide it should be. Writing two formatting strings, one for when we can display marked-up text and one for when we cannot, is clearly not good programming practice (changing one sentence means changing two formatting strings; the code becomes hard to maintain). What we need is a tool that can pre-process our formatting string and add decorations to it.</p>
<p>This tool is already provided by Format: semantic tags.</p>
<h3>III.2 Semantic tags</h3>
<p>Opened with <code>@{</code> and closed with <code>@}</code>, like boxes they are parameterised by the <code><t></code> construction, which indicates the opening (and closing) of the tag <code>t</code>. Unlike boxes, tags have no meaning at all for the printer. You can draw an analogy with OCaml's built-in types (<code>int</code>, <code>bool</code>, <code>float</code>, etc.) versus programmer-defined types (<code>type t = A | B</code>, for example): built-in types come with plenty of associated functions, whereas defined types mean nothing until you write the functions that manipulate them. The first advantage of these tags is therefore that, since they mean nothing, they are simply ignored by the printer when our final string is displayed:</p>
<p><img src="/blog/assets/img/exemple-stag-tryocaml.png" alt="Exemple de marquage avec un tag sémantique de texte dans un navigateur depuis TryOCaml" /></p>
<p>By default, the printer does not process semantic tags (which keeps the default printing behaviour as simple as possible). Semantic tag handling can be enabled for each <code>formatter</code> independently with the functions <code>val pp_set_tags : formatter -> bool -> unit</code>, <code>val pp_set_print_tags : formatter -> bool -> unit</code> and <code>val pp_set_mark_tags : formatter -> bool -> unit</code>, whose effects we will see shortly. Let us first look at what happens with the general function <code>pp_set_tags</code>, which combines the other two:</p>
<p><img src="/blog/assets/img/exemple-stag-actif-tryocaml.png" alt="Exemple de traitement du marquage avec un tag sémantique de texte dans un navigateur depuis TryOCaml" /></p>
<p>What happened?</p>
<p>Once semantic tag handling is enabled, four operations come into play, at each tag opening and closing:</p>
<ul>
<li><code>print_open_stag</code> followed by <code>mark_open_stag</code> for each tag <code>t</code> opened with <code>@{<t></code>
</li>
<li><code>mark_close_stag</code> followed by <code>print_close_stag</code> for each tag <code>t</code> closed with the <code>@}</code> matching the last <code>@{<t></code>
</li>
</ul>
<p>Let us look at the signatures of these four operations:</p>
<pre><code class="language-ocaml">type formatter_stag_functions = {
mark_open_stag : stag -> string;
mark_close_stag : stag -> string;
print_open_stag : stag -> unit;
print_close_stag : stag -> unit;
}
</code></pre>
<p>The <code>mark_*_stag</code> functions take a semantic tag and return a string, whereas the <code>print_*_stag</code> functions take the same argument but return nothing. The reason behind this is actually quite simple:</p>
<ul>
<li>The marking functions write directly to the output device (the terminal, the file, etc.)
</li>
<li>The printing functions write to the <code>formatter</code>, which treats their output as ordinary strings that can therefore trigger line breaks, cuts, new boxes, etc.
</li>
</ul>
<p>A colour directive for an ANSI terminal does not show up in the output, the text simply ends up coloured, so it seems natural not to want this directive to affect pretty-printing. On the other hand, if we were targeting a LaTeX or HTML file, this colour directive would appear in the output and should therefore influence pretty-printing.</p>
<p>It is therefore fairly easy to know when to use <code>print_*_stag</code> and when to use <code>mark_*_stag</code>:</p>
<ul>
<li>If the tag should have an immediate impact on the appearance of the displayed text (colour, size, decorations…) and not on its content, use <code>mark_*_stag</code>
</li>
<li>If the tag should have an impact on the content of the displayed text and not on its appearance, use <code>print_*_stag</code>
</li>
<li>If the tag should impact both the content and the appearance of the displayed text, use both, keeping content handled by <code>print_*_stag</code> clearly separated from appearance handled by <code>mark_*_stag</code>
</li>
</ul>
<p>Each of these four functions has a default behaviour, shown here:</p>
<pre><code class="language-ocaml">let mark_open_stag = function
| String_tag s -> "<" ^ s ^ ">"
| _ -> ""
let mark_close_stag = function
  | String_tag s -> "</" ^ s ^ ">"
  | _ -> ""
let print_open_stag = ignore
let print_close_stag = ignore
</code></pre>
<p>The <code>stag</code> type is an extensible sum type (introduced in <a href="https://ocaml.org/releases/4.02.html">OCaml 4.02.0</a>), meaning it is defined as follows</p>
<pre><code class="language-ocaml">type stag = ..
type stag += String_tag of string
</code></pre>
<p>By default, only <code>String_tag of string</code> values are therefore recognised as semantic tags (they are also the only ones that can be produced by the <code>@{<t> ... @}</code> construction; here <code>t</code> is treated as <code>String_tag t</code>), as illustrated by the default behaviour of <code>mark_open_stag</code> and <code>mark_close_stag</code>. This default behaviour also lets us understand what happened here:</p>
<p><img src="/blog/assets/img/exemple-stag-actif-tryocaml_002.png" alt="Exemple de traitement du marquage avec un tag sémantique de texte dans un navigateur depuis TryOCaml" /></p>
<p>Since we had not customised the tag-handling operations, their default behaviour ran, which amounts to printing the tag directly between angle brackets without going through the <code>formatter</code>. We therefore have to define the behaviours we want for our tags (careful: since we are only manipulating strings here, any mistake is accordingly hard to track down and fix, so it is best to avoid the infamous <code>| _ -> ()</code>; we should in fact avoid it everywhere when possible, but that is another story).</p>
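<p>To make this concrete, here is a minimal sketch of the default behaviour once tag handling is enabled (the tag name <code>blue</code> is arbitrary):</p>
<pre><code class="language-ocaml">let () =
  Format.pp_set_tags Format.std_formatter true;
  Format.printf "@{<blue>Hello@} world@."
(* With the default mark_open_stag/mark_close_stag, this prints:
   <blue>Hello</blue> world *)
</code></pre>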
<p>Let us start by defining our tags and what we want them to correspond to:</p>
<pre><code class="language-ocaml">open Format
type style =
| Normal
| Italic
| Italic_off
| FG_Black
| FG_Blue
| FG_Default
| BG_White
| BG_Default
let close_tag = function
| Italic -> Italic_off
| FG_Black | FG_Blue | FG_Default -> FG_Default
| BG_White | BG_Default -> BG_Default
| _ -> Normal
let style_of_tag = function
| String_tag s -> begin match s with
| "n" -> Normal
| "italic" -> Italic
| "/italic" -> Italic_off
| "fg_black" -> FG_Black
| "fg_blue" -> FG_Blue
| "fg_default" -> FG_Default
| "bg_white" -> BG_White
| "bg_default" -> BG_Default
| _ -> raise Not_found
end
| _ -> raise Not_found
</code></pre>
<p>Now that every possible tag is handled, we need to map each of them to its value (an ANSI value in this case) and implement our own marking functions (not printing functions, since a priori these tags have no effect on the content of the displayed text):</p>
<pre><code class="language-ocaml">(* See https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters for some values *)
let to_ansi_value = function
| Normal -> "0"
| Italic -> "3"
| Italic_off -> "23"
| FG_Black -> "30"
| FG_Blue -> "34"
| FG_Default -> "39"
| BG_White -> "47"
| BG_Default -> "49"
let ansi_tag = Printf.sprintf "\x1B[%sm"
let start_mark_ansi_stag t = ansi_tag @@ to_ansi_value @@ style_of_tag t
let stop_mark_ansi_stag t = ansi_tag @@ to_ansi_value @@ close_tag @@ style_of_tag t
</code></pre>
<p>As a reminder, an ANSI tag is opened with the escape sequence <code>\x1B</code> followed by one or more tag values separated by <code>;</code>, between <code>[</code> and <code>m</code>. In our case each tag is associated with a single value, but we could perfectly well have an <code>Error -> "1;4;31"</code> that would force a bold, underlined, red display. As long as the string sent to the terminal is a valid ANSI marking sequence, anything goes.</p>
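<p>For instance (a purely illustrative snippet, independent of the functions above), any such compound sequence can be sent to the terminal directly:</p>
<pre><code class="language-ocaml">let error_style = "1;4;31" (* bold, underlined, red *)
let () = Format.printf "\x1B[%smSomething went wrong\x1B[0m@." error_style
</code></pre>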
<p>We then have to make sure these functions are the ones the <code>formatter</code> uses when it processes tags:</p>
<pre><code class="language-ocaml">let add_ansi_marking formatter =
let open Format in
pp_set_mark_tags formatter true;
let old_fs = pp_get_formatter_stag_functions formatter () in
pp_set_formatter_stag_functions formatter
{ old_fs with
mark_open_stag = start_mark_ansi_stag;
mark_close_stag = stop_mark_ansi_stag }
</code></pre>
<p>We use <code>pp_set_mark_tags</code> (rather than <code>pp_set_tags</code>) because we make no use of <code>print_*_stag</code>, and we plug our <code>*_ansi_stag</code> functions into the <code>mark_*_stag</code> fields.</p>
<p>All that remains is to make sure the semantic tags are processed, with our functions, before printing our string:</p>
<pre><code class="language-ocaml">let () =
add_ansi_marking std_formatter;
Format.printf "@{<fg_blue>Blue Text @}@{<italic>@{<bg_white>@{<fg_black>Italic WhiteBG BlackFG Text@}@}@}"
</code></pre>
<p>And the output in the terminal is indeed the one we wanted:</p>
<p><img src="/blog/assets/img/ansi-color-term-stag.png" alt="Exemple de marquage avec la gestion des tags sémantiques par Format dans un terminal ANSI" /></p>
<p>If the program is to run in a non-ANSI terminal, simply remove the line <code>add_ansi_marking std_formatter;</code>:</p>
<p><img src="/blog/assets/img/ansi-color-try-stag.png" alt="Exemple de marquage avec la gestion des tags sémantiques par Format dans un terminal ANSI" /></p>
<p>We could also arrange for our text to be sent to an HTML document.</p>
<p>We first have to change the values associated with the tags (note the zero-indentation vertical boxes mentioned in the paragraph on structural boxes):</p>
<pre><code class="language-ocaml">let to_html_value fmt =
let fg_color c = Format.fprintf fmt {|@[<v 0>@[<v 2><span style="color:%s;">@,|} c in
let bg_color c = Format.fprintf fmt {|@[<v 0>@[<v 2><span style="background-color:%s;">@,|} c in
let close_span () = Format.fprintf fmt "@]@,</span>@]" in
let default = Format.fprintf fmt in
fun t -> match t with
| Normal -> ()
| Italic -> default "<i>"
| Italic_off -> default "</i>"
| FG_Black -> fg_color "black"
| FG_Blue -> fg_color "blue"
| FG_Default -> close_span ()
| BG_White -> bg_color "white"
| BG_Default -> close_span ()
</code></pre>
<p>The <code>{| ... |}</code> construction gives string literals in which <code>"</code> and <code>\</code> are not special characters, so we can write <code>{|"This is a nice "|}</code> without escaping them.</p>
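<p>A tiny example of such a quoted string literal:</p>
<pre><code class="language-ocaml">let s = {|no need to escape "quotes" or backslashes: C:\tmp|}
let () = print_endline s
</code></pre>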
<p>Likewise, the construction</p>
<pre><code class="language-ocaml">let fonction arg1 ... argn =
let expr1 = ... in
...
let exprn = ... in
fun argn1 ... argnm ->
</code></pre>
<p>lets us define, inside a function, expressions that depend on the arguments supplied so far and therefore, in the case of a partial application, compute this environment only once. For the function <code>to_html_value</code>, I can thus create the partial application <code>let to_html_value_std = to_html_value std_formatter</code>, which directly contains the implementations of <code>fg_color</code>, <code>bg_color</code>, <code>close_span</code> and <code>default</code> for <code>std_formatter</code>.</p>
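<p>Here is a generic, purely illustrative sketch of that pattern (unrelated to the HTML output): the bindings placed before <code>fun</code> are computed once per partial application:</p>
<pre><code class="language-ocaml">let make_logger fmt =
  let prefix = "[log] " in                    (* computed once, when fmt is supplied *)
  fun msg -> Format.fprintf fmt "%s%s@." prefix msg

let log = make_logger Format.std_formatter    (* prefix is built here, once *)
let () = log "first message"; log "second message"
</code></pre>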
<p>Unlike the ANSI terminal case, what changes here is the content of the text, not its appearance, so we use the <code>print_*_stag</code> functions. This is why our functions must write directly to the <code>formatter</code> instead of returning a string.</p>
<p>The opening and closing functions do not change much:</p>
<pre><code class="language-ocaml">let start_print_html_stag fmt t =
to_html_value fmt @@ style_of_tag t
let stop_print_html_stag fmt t =
to_html_value fmt @@ close_tag @@ style_of_tag t
</code></pre>
<p>We then plug these functions into the <code>print_*_stag</code> fields:</p>
<pre><code class="language-ocaml">let add_html_printings formatter =
let open Format in
pp_set_mark_tags formatter false;
pp_set_print_tags formatter true;
let old_fs = pp_get_formatter_stag_functions formatter () in
pp_set_formatter_stag_functions formatter
{ old_fs with
print_open_stag = start_print_html_stag formatter;
print_close_stag = stop_print_html_stag formatter}
</code></pre>
<p>We take this opportunity to disable marking on the formatter passed as a parameter. This avoids nasty surprises in case it had been enabled earlier (we should have done the same when setting up marking for the ANSI terminal).</p>
<p>Finally, the call to:</p>
<pre><code class="language-ocaml">let () =
add_html_printings std_formatter;
Format.printf "@[<v 0>@{<fg_blue>Blue Text @}@,@{<italic>@{<bg_white>@{<fg_black>Italic WhiteBG BlackFG Text@}@}@}@]@."
</code></pre>
<p>gives us the expected result:</p>
<pre><code class="language-html"><span style="color:blue;">
Blue Text
</span>
<i>
<span style="background-color:white;">
<span style="color:black;">
Italic WhiteBG BlackFG Text
</span>
</span>
</i>
</code></pre>
<h2>Conclusion</h2>
<p>Here we are at the end of this tutorial which, I hope, will let you approach the Format module with much more serenity.</p>
<p>Among the possibilities not presented here but worth keeping in mind:</p>
<ul>
<li>You can completely redefine all the output functions defined in this record (a small sketch follows the record below):
</li>
</ul>
<pre><code class="language-ocaml">type formatter_out_functions = {
  out_string : string -> int -> int -> unit;
  out_flush : unit -> unit;
  out_newline : unit -> unit;
  out_spaces : int -> unit;
  out_indent : int -> unit;
}
</code></pre>
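<p>For instance, here is a hedged sketch (the buffer is illustrative, and <code>out_spaces</code>/<code>out_indent</code> are left untouched) that swaps the low-level output functions so that everything printed on <code>std_formatter</code> ends up in a <code>Buffer.t</code>:</p>
<pre><code class="language-ocaml">let buf = Buffer.create 64

let () =
  let fs = Format.pp_get_formatter_out_functions Format.std_formatter () in
  Format.pp_set_formatter_out_functions Format.std_formatter
    { fs with
      out_string  = (fun s pos len -> Buffer.add_substring buf s pos len);
      out_newline = (fun () -> Buffer.add_char buf '\n');
      out_flush   = (fun () -> ()) };
  Format.printf "redirected@."
  (* buf now contains "redirected\n"; nothing was written to stdout *)
</code></pre>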
<ul>
<li>You can turn any output into a formatter, so as to write to it directly without going through intermediate strings (notably <code>val formatter_of_buffer : Buffer.t -> formatter</code>, which lets you write straight into a buffer)
</li>
<li>Symbolic pretty-printing, which prints symbolically and therefore lets you see exactly which directives will be sent to the <code>formatter</code> when printing. Very useful for debugging cacophonous output, but also extremely powerful for adding a post-processing phase (for instance if you want to add a symbol at the beginning of every line)
</li>
<li>The handy functions you should not forget to use (I know OCaml developers love reinventing the wheel, but there already are functions to print lists, options and <code>Ok _ | Error _</code> results):
</li>
</ul>
<pre><code class="language-ocaml">val pp_print_list : ?pp_sep:(formatter -> unit -> unit) -> (formatter -> 'a -> unit) -> formatter -> 'a list -> unit
(* Prints a list, separating elements with the default separator `@,` or with the one provided *)

val pp_print_option : ?none:(formatter -> unit -> unit) -> (formatter -> 'a -> unit) -> formatter -> 'a option -> unit
(* Prints the content of an option when it is Some, and, for None, nothing by default or the printer provided *)

val pp_print_result : ok:(formatter -> 'a -> unit) -> error:(formatter -> 'e -> unit) -> formatter -> ('a, 'e) result -> unit
(* Prints the content of a result. The arguments are not optional here and determine what is printed in the Ok _ and Error _ cases *)
</code></pre>
<ul>
<li>Finally, a shovelful of <code>printf</code>-like functions, among which:
</li>
<li><code>fprintf</code>, which we have already seen
</li>
<li><code>dprintf</code>, which delays the evaluation of the printing and therefore avoids computing output that will never be displayed (see the sketch after this list)
</li>
<li><code>ifprintf</code>, which prints nothing (useful when you want the same signature as <code>fprintf</code> while being sure nothing will be done)
</li>
</ul>
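<p>A small sketch of <code>dprintf</code> (the condition and the message are made up): the result is a <code>formatter -> unit</code> closure that can be passed to <code>%t</code> and whose printing directives are only interpreted if it is actually printed:</p>
<pre><code class="language-ocaml">let make_msg x : Format.formatter -> unit =
  Format.dprintf "@[<hov 2>invalid value:@ %d@]" x

let () =
  let msg = make_msg 42 in
  if Array.length Sys.argv > 1 then   (* only pay the printing cost when needed *)
    Format.printf "%t@." msg
</code></pre>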
<p>Sources:</p>
<ul>
<li>
<p>The Format tutorial on the OCaml website</p>
</li>
<li>
<p>Richard Bonichon, Pierre Weis. Format Unraveled. 28èmes Journées Francophones des Langages Applicatifs, Jan 2017, Gourette, France. hal-01503081</p>
</li>
</ul>
<p>Source code:</p>
<p>LaTeX code for the <code>printf</code> figure:</p>
<pre><code class="language-latex">\documentclass[tikz,border=10pt]{standalone}
\usepackage{tikz}
\usetikzlibrary{math}
\usetikzlibrary{tikzmark}
\usepackage{xcolor}
\pagecolor[rgb]{0,0,0}
\color[rgb]{1,1,1}
\colorlet{color1}{blue!50!white}
\colorlet{color2}{red!50!white}
\colorlet{color3}{green!50!black}
\begin{document}
\begin{tikzpicture}[remember picture]
\node [align=left,font=\ttfamily] at (0,0) {
let s = "toto" in\\[2em]
printf "{\color{color1}\tikzmarknode{scd}{\%d}}
{\color{color2}\tikzmarknode{scc}{\%c}}
{\color{color3}\tikzmarknode{scs}{\%s}}"
{\color{color1}\tikzmarknode{d}{3}}
{\color{color2}\tikzmarknode{c}{'c'}}
{\color{color3}\tikzmarknode{s}{s}}\\[2em]
> "3 c toto"
};
\draw[<-, color1] (scd.north) -- ++(0,0.5) -| (d);
\draw[<-, color2] (scc.south) -- ++(0,-0.4) -| (c);
\draw[<-, color3] (scs.north) -- ++(0,0.4) -| (s);
\end{tikzpicture}
\end{document}
</code></pre>
<p>LaTeX code for the <code>fprintf</code> figure:</p>
<pre><code class="language-latex">\documentclass[tikz,border=10pt]{standalone}
\usepackage{tikz}
\usetikzlibrary{math}
\usetikzlibrary{decorations.pathreplacing,tikzmark}
\usepackage{xcolor}
\pagecolor[rgb]{0,0,0}
\color[rgb]{1,1,1}
\colorlet{color1}{blue!50!white}
\colorlet{color2}{red!50!white}
\colorlet{color3}{green!50!black}
\begin{document}
\begin{tikzpicture}[remember picture]
\node [align=left,font=\ttfamily] at (0,0) {
let s = "toto" in\\[2em]
fprintf \tikzmarknode{fmt}{fmt} \tikzmarknode{str}{"{\color{color1}\tikzmarknode{scd}{\%d}}
{\color{color2}\tikzmarknode{scc}{\%c}}
{\color{color3}\tikzmarknode{scs}{\%s}}"}
{\color{color1}\tikzmarknode{d}{3}}
{\color{color2}\tikzmarknode{c}{'c'}}
{\color{color3}\tikzmarknode{s}{s}}\\[2em]
> \\
(* fmt <- "3 c toto" *)
};
\draw[<-, color1] (scd.north) -- ++(0,0.5) -| (d);
\draw[<-, color2] (scc.south) -- ++(0,-0.3) -| (c);
\draw[<-, color3] (scs.north) -- ++(0,0.4) -| (s);
\draw[decorate,decoration={brace, amplitude=5pt, raise=10pt},yshift=-2cm] (str.south east) -- (str.south west) node[midway, yshift=-13pt](a){} ;
\draw[->, white] (a.south) -- ++(0,-0.1) -| (fmt);
\end{tikzpicture}
\end{document}
</code></pre>
<p>LaTeX code for the <code>fprintf</code> figure using <code>%a</code>:</p>
<pre><code class="language-latex">\documentclass[tikz,border=10pt]{standalone}
\usepackage{tikz}
\usetikzlibrary{math}
\usetikzlibrary{decorations.pathreplacing,tikzmark}
\usepackage{xcolor}
\pagecolor[rgb]{0,0,0}
\color[rgb]{1,1,1}
\colorlet{color1}{blue!50!white}
\colorlet{color2}{red!50!white}
\colorlet{color3}{green!50!black}
\begin{document}
\begin{tikzpicture}[remember picture]
\node [align=left,font=\ttfamily] at (0,0) {
let s = "toto" in\\[2em]
type expr = \{i: int; j: int\}\\
let pp\_expr fmt \{i; j\} = fprintf fmt "<\%d, \%d>" i j in\\[2em]
fprintf \tikzmarknode{fmt}{std\_formatter} \tikzmarknode{str}{"{\color{color1}\tikzmarknode{scd}{\%d}}
{\color{color2}\tikzmarknode{sca}{\%a}}
{\color{color3}\tikzmarknode{scs}{\%s}}"}
{\color{color1}\tikzmarknode{d}{3}}
{\color{color2}\tikzmarknode{ppe}{pp\_expr}}
{\color{color2}\tikzmarknode{e}{\{i=1; j=2\}}}
{\color{color3}\tikzmarknode{s}{s}}\\[2em]
> "3 <1, 2> toto"
};
\draw[<-, color1] (scd.north) -- ++(0,0.5) -| (d);
\draw[<-, color2] (sca.south) -- ++(0,-0.3) -| (ppe);
\draw[<-, color2] (sca.65) -- ++(0,0.3) -| (e);
\draw[->, color2] (fmt.north) -- ++(0,0.2) -| (sca.115);
\draw[<-, color3] (scs.south) -- ++(0,-0.4) -| (s);
\draw[decorate,decoration={brace, amplitude=5pt, raise=12pt},yshift=-2cm] (str.south east) -- (str.south west) node[midway, yshift=-13pt](a){} ;
\draw[->, white] (a.south) -- ++(0,-0.1) -| (fmt);
\end{tikzpicture}
\end{document}
</code></pre>
A Solidity parser in OCaml with Menhirhttps://ocamlpro.com/blog/2020_05_19_ocaml_solidity_parser_with_menhir2020-05-19T08:12:13Z2020-05-19T08:12:13Z
David Declerck
This article is cross-posted on Origin Labs’ Dune Network blog We are happy to announce the first release of our Solidity parser, written in OCaml using Menhir. This is a joint effort with Origin Labs, the company dedicated to blockchain challenges, to implement a full interpreter for the Solidity...<p align="center" >
<a href="/blog/2020_05_19_ocaml_solidity_parser_with_menhir">
<img width="420" height="420" alt="Solidity Logo" title="A Solidity parser in OCaml with Menhir" src="/blog/assets/img/solidity-cover.png">
</a>
</p>
<br />
<blockquote>
<p>This article is cross-posted on Origin Labs’ Dune Network <a href="https://medium.com/dune-network/a-solidity-parser-in-ocaml-with-menhir-e1064f94e76b">blog</a></p>
</blockquote>
<p>We are happy to announce the first release of <a href="https://github.com/OCamlPro/ocaml-solidity">our Solidity parser</a>, written in OCaml using <a href="http://gallium.inria.fr/~fpottier/menhir/">Menhir</a>. This is a joint effort with <a href="https://www.origin-labs.com/">Origin Labs</a>, the company dedicated to blockchain challenges, to implement a full interpreter for the <a href="https://solidity.readthedocs.io/en/v0.6.8/">Solidity language</a> directly in a blockchain.</p>
<p><img src="/blog/assets/img/logo_solidity_title.png" alt="Solidity Logo" /></p>
<p>Solidity is probably the most popular language for smart-contracts, small pieces of code triggered when accounts receive transactions on a blockchain. Solidity is an object-oriented, strongly-typed language with a JavaScript-like syntax.</p>
<p><img src="/blog/assets/img/logo_ethereum_title.png" alt="Ethereum Logo" /></p>
<p>Solidity was first implemented for the <a href="https://ethereum.org/">Ethereum</a> blockchain, with a compiler to the EVM, the Ethereum Virtual Machine.</p>
<p><img src="/blog/assets/img/logo_dune_title.png" alt="Dune Network Logo" /></p>
<p>Dune Network takes a different approach, as Solidity smart-contracts will be executed natively, after type-checking. Solidity will be the third native language on Dune Network, with <a href="https://dune.network/docs/dune-node-mainnet/whitedoc/michelson.html">Michelson</a>, a low-level strongly-typed language inherited from Tezos, and <a href="https://dune.network/docs/dune-node-mainnet/love-doc/introduction.html">Love</a>, a higher-level strongly-typed language, also implemented jointly by OCamlPro and Origin Labs.</p>
<p>A first step has been accomplished, with the completion of the Solidity parser and printer, written in OCaml with Menhir.</p>
<p>This parser (and its printer companion) is now available as a standalone library under the LGPLv3 license with Linking Exception, allowing its integration in all projects. The source code is available at https://gitlab.com/o-labs/solidity-parser-ocaml.</p>
<p>Our parser should support all of Solidity 0.6, with the notable exception of inline assembly (which may be added in a future release).</p>
<h2>Example contract</h2>
<p>Here is an example of a very simple contract that stores an integer value and allows the contract’s owner to add an arbitrary value to this value, and any other contract to read this value:</p>
<pre><code class="language-solidity">pragma solidity >=0.6.0 <0.7.0;
contract C {
address owner;
int x;
constructor() public {
owner = msg.sender;
x = 0;
}
function add(int d) public {
require(msg.sender == owner);
x += d;
}
function read_x() public view returns(int) {
return x;
}
}
</code></pre>
<h2>Parser Usage</h2>
<h3>Executable</h3>
<p>Our parser comes with a small executable that demonstrates the library usage. Simply run:</p>
<pre><code class="language-bash">./solp contract.sol
</code></pre>
<p>This will parse the file <code>contract.sol</code> and reprint it on the terminal.</p>
<h3>Library</h3>
<p>To use our parser as a library, add it to your program’s dependencies and use the following function:</p>
<pre><code class="language-ocaml">Solidity_parser.parse_contract_file : string -> Solidity_parser.Solidity_types.module_
</code></pre>
<p>It takes a filename and returns a Solidity AST.</p>
<p>If you wish to print this AST, you may turn it into its string representation by sending it to the following function:</p>
<pre><code class="language-ocaml">Solidity_parser.Printer.string_of_code : Solidity_parser.Solidity_types.module_ -> string
</code></pre>
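<p>For instance, a minimal sketch of a round-trip through the library, assuming the two functions above are in scope and the file name is given on the command line:</p>
<pre><code class="language-ocaml">(* parse a Solidity file and print it back *)
let () =
  let ast = Solidity_parser.parse_contract_file Sys.argv.(1) in
  print_string (Solidity_parser.Printer.string_of_code ast)
</code></pre>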
<h2>Conclusion</h2>
<p>Of course, all of this is Work In Progress, but we are quite happy to share it with the OCaml community. We think there is a tremendous work to be done around blockchains for experts in formal methods. Do not hesitate to contact us if you want to use this library!</p>
<h2>About Origin Labs</h2>
<p>Origin Labs is a company founded in 2019 by the former blockchain team at OCamlPro. At Origin Labs, they have been developing Dune Network, a fork of the Tezos blockchain, its ecosystem, and applications over the Dune Network platform. At OCamlPro, they developed TzScan, the most popular block explorer at the time, Liquidity, a smart contract language, and were involved in the development of the core protocol and node. Do not hesitate to reach out by email: contact@origin-labs.com.</p>
opam 2.1.0 alpha is here!https://ocamlpro.com/blog/2020_04_22_opam_2.1.0_alpha_is_here2020-04-22T08:12:13Z2020-04-22T08:12:13Z
Raja Boujbel
Louis Gesbert
We are happy to announce an alpha for opam 2.1.0, one year and a half in the making after the release of 2.0.0. Many new features made it in (see the complete changelog or release note for the details), but here are a few highlights of this release. Release highlights The two following features have ...<p>We are happy to announce an alpha for opam 2.1.0, one year and a half in the
making after the release of 2.0.0.</p>
<p>Many new features made it in (see the <a href="https://github.com/ocaml/opam/blob/2.1.0-alpha/CHANGES">complete
changelog</a> or <a href="https://github.com/ocaml/opam/releases/tag/2.1.0-alpha">release
note</a> for the details),
but here are a few highlights of this release.</p>
<h2>Release highlights</h2>
<p>The two following features have been around for a while as plugins and are now
completely integrated in the core of opam. No extra installs needed anymore, and
a smoother experience.</p>
<h3>Seamless integration of System dependencies handling (a.k.a. "depexts")</h3>
<p>A number of opam packages depend on tools or libraries installed on the system,
which are out of the scope of opam itself. Previous versions of opam added a
<a href="http://opam.ocaml.org/doc/Manual.html#opamfield-depexts">specification format</a>,
and opam 2.0 already handled checking the OS and extracting the required system
package names.</p>
<p>However, the workflow generally involved letting opam fail once, then installing
the dependencies and retrying, or explicitly using the
<a href="https://github.com/ocaml/opam-depext">opam-depext plugin</a>, which was invaluable
for CI but still incurred extra steps.</p>
<p>With opam 2.1.0, <em>depexts</em> are seamlessly integrated, and you basically won't
have to worry about them ahead of time:</p>
<ul>
<li>Before applying its course of actions, opam 2.1.0 checks that external
dependencies are present, and will prompt you to install them. You are free to
let it do it using <code>sudo</code>, or just run the provided commands yourself.
</li>
<li>It is resilient to <em>depexts</em> getting removed or out of sync.
</li>
<li>Opam 2.1.0 detects packages that depend on stuff that is not available on your
OS version, and automatically avoids them.
</li>
</ul>
<p>This is all fully configurable, and can be bypassed without tricky commands when
you need it (<em>e.g.</em> when you compiled a dependency yourself).</p>
<h3>Dependency locking</h3>
<p>To share a project for development, it is often necessary to be able to
reproduce the exact same environment and dependencies setting — as opposed to
allowing a range of versions as opam encourages you to do for releases.</p>
<p>For some reason, most other package managers call this feature "lock files".
Opam can handle those in the form of <code>[foo.]opam.locked</code> files, and the
<code>--locked</code> option.</p>
<p>With 2.1.0, you no longer need a plugin to generate these files: just running
<code>opam lock</code> will create them for existing <code>opam</code> files, enforcing the exact
version of all dependencies (including locally pinned packages).</p>
<p>If you check-in these files, new users will just have to run
<code>opam switch create . --locked</code> on a fresh clone to get a local switch ready to
build the project.</p>
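<p>A typical session, using only the commands mentioned above (the package name <code>foo</code> is illustrative), would look like:</p>
<pre><code class="language-shell-session"># in the project you want to share
$ opam lock                        # generates foo.opam.locked next to foo.opam
$ git add foo.opam.locked

# later, on a fresh clone
$ opam switch create . --locked
</code></pre>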
<h3>Pinning sub-directories</h3>
<p>This one is completely new: fans of the <em>Monorepo</em> rejoice, opam is now able to
handle projects in subtrees of a repository.</p>
<ul>
<li>Using <code>opam pin PROJECT_ROOT --subpath SUB_PROJECT</code>, opam will look for
<code>PROJECT_ROOT/SUB_PROJECT/foo.opam</code>. This will behave as a pinning to
<code>PROJECT_ROOT/SUB_PROJECT</code>, except that the version-control handling is done
in <code>PROJECT_ROOT</code>.
</li>
<li>Use <code>opam pin PROJECT_ROOT --recursive</code> to automatically lookup all sub-trees
with opam files and pin them.
</li>
</ul>
<h3>Opam switches are now defined by invariants</h3>
<p>Previous versions of opam defined switches based on <em>base packages</em>, which
typically included a compiler, and were immutable. Opam 2.1.0 instead defines
them in terms of an <em>invariant</em>, which is a generic dependency formula.</p>
<p>This removes a lot of the rigidity <code>opam switch</code> commands had, with little
changes on the existing commands. For example, <code>opam upgrade ocaml</code> commands are
now possible; you could also define the invariant as <code>ocaml-system</code> and have
its version change along with the version of the OCaml compiler installed
system-wide.</p>
<h3>Configuring opam from the command-line</h3>
<p>The new <code>opam option</code> command allows you to configure several options,
without requiring manual edition of the configuration files.</p>
<p>For example:</p>
<ul>
<li><code>opam option jobs=6 --global</code> will set the number of parallel build
jobs opam is allowed to run (along with the associated <code>jobs</code> variable)
</li>
<li><code>opam option depext-run-commands=false</code> disables the use of <code>sudo</code> for
handling system dependencies; it will be replaced by a prompt to run the
installation commands.
</li>
</ul>
<p>The command <code>opam var</code> is extended with the same format, acting on switch and
global variables.</p>
<h2>Try it!</h2>
<p>In case you plan a possible rollback, you may want to first backup your
<code>~/.opam</code> directory.</p>
<p>The upgrade instructions are unchanged:</p>
<ol>
<li>Either from binaries: run
</li>
</ol>
<pre><code class="language-shell-session">$~ bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.1.0~alpha"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.1.0-alpha">the Github "Releases" page</a> and add it to your PATH.</p>
<ol start="2">
<li>Or from source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.1.0-alpha#compiling-this-repo">README</a>.
</li>
</ol>
<p>You should then run:</p>
<pre><code class="language-shell-session">opam init --reinit -ni
</code></pre>
<p>This is still an alpha, so a few glitches or regressions are to be expected.
Please report them to <a href="https://github.com/ocaml/opam/issues">the bug-tracker</a>.
Thanks for trying it out, and we hope you enjoy it!</p>
<blockquote>
<p>NOTE: this article is cross-posted on
<a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and
<a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
opam 2.0.7 releasehttps://ocamlpro.com/blog/2020_04_21_opam_2.0.7_release2020-04-21T08:12:13Z2020-04-21T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the minor release of opam 2.0.7. This new version contains backported small fixes: Escape Windows paths on manpages [#4129 @AltGr @rjbou]
Fix opam installer opam file [#4058 @rjbou]
Fix various warnings [#4132 @rjbou @AltGr - fix #4100]
Fix dune 2.5.0 promote-install-files...<p>We are pleased to announce the minor release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.7">opam 2.0.7</a>.</p>
<p>This new version contains <a href="https://github.com/ocaml/opam/pull/4143">backported</a> small fixes:</p>
<ul>
<li>Escape Windows paths on manpages [<a href="https://github.com/ocaml/opam/pull/4129">#4129</a> <a href="https://github.com/AltGr">@AltGr</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>Fix opam installer opam file [<a href="https://github.com/ocaml/opam/pull/4058">#4058</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>Fix various warnings [<a href="https://github.com/ocaml/opam/pull/4132">#4132</a> <a href="https://github.com/rjbou">@rjbou</a> <a href="https://github.com/AltGr">@AltGr</a> - fix <a href="https://github.com/ocaml/opam/issues/4100">#4100</a>]
</li>
<li>Fix dune 2.5.0 promote-install-files duplication [<a href="https://github.com/ocaml/opam/pull/4132">#4132</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
</ul>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.0.7"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.7">the Github "Releases" page</a> and add it to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.7#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
Le nouveau GC d’OCaml 4.10 : premier aperçu de la stratégie best-fit https://ocamlpro.com/blog/2020_03_24_fr_le_nouveau_gc_docaml_4.10_premier_apercu_de_la_strategie_best_fit2020-03-24T08:12:13Z2020-03-24T08:12:13Z
Thomas Blanc
An in-depth Look at OCaml’s new "Best-fit" Garbage Collector Strategy OCaml's GC quietly works behind the scenes for the efficiency of your memory allocations. Like an unsung hero, it remains unknown to most OCaml hackers. With the arrival of OCaml 4.10, it gains a new...<p><a href="/blog/2020_03_23_in_depth_look_at_best_fit_gc"><img src="/blog/assets/img/logo_round_ocaml_search.png" alt="An in-depth Look at OCaml’s new "Best-fit" Garbage Collector Strategy" /></a></p>
<p>OCaml's GC quietly works behind the scenes for the efficiency of your memory allocations. Like an unsung hero, it remains unknown to most OCaml hackers. With the arrival of OCaml 4.10, it gains a new strategy, which appeared in the <a href="https://ocaml.org/releases/4.10.0.html#Changes">changelog</a> and is signed by Damien Doligez.</p>
<p>In this article we begin exploring the new strategy, named <em>best-fit</em>, of the new Garbage Collector in OCaml 4.10.</p>
<blockquote>
<p>Read more: <a href="/2020/03/23/ocaml-new-best-fit-garbage-collector/">article in English</a>.</p>
</blockquote>
An in-depth Look at OCaml’s new “Best-fit” Garbage Collector Strategyhttps://ocamlpro.com/blog/2020_03_23_in_depth_look_at_best_fit_gc2020-03-23T08:12:13Z2020-03-23T08:12:13Z
Thomas Blanc
An in-depth Look at OCaml’s new "Best-fit" Garbage Collector Strategy The Garbage Collector probably is OCaml’s greatest unsung hero. Its pragmatic approach allows us to allocate without much fear of efficiency loss. In a way, the fact that most OCaml hackers know little about it is a good sign:...<p><a href="/blog/2020_03_23_in_depth_look_at_best_fit_gc"><img src="/blog/assets/img/logo_round_ocaml_search.png" alt="An in-depth Look at OCaml’s new "Best-fit" Garbage Collector Strategy" /></a></p>
<p>The Garbage Collector probably is OCaml’s greatest unsung hero. Its pragmatic approach allows us to allocate without much fear of efficiency loss. In a way, the fact that most OCaml hackers know little about it is a good sign: you want a runtime to gracefully do its job without having to mind it all the time.</p>
<p>But as OCaml 4.10.0 has now hit the shelves, a very exciting feature is <a href="https://ocaml.org/releases/4.10.0.html#Changes">in the changelog</a>:</p>
<blockquote>
<p>#8809, #9292: Add a best-fit allocator for the major heap; still experimental, it should be much better than current allocation policies (first-fit and next-fit) for programs with large heaps, reducing both GC cost and memory usage.
This new best-fit is not (yet) the default; set it explicitly with OCAMLRUNPARAM="a=2" (or Gc.set from the program). You may also want to increase the <code>space_overhead</code> parameter of the GC (a percentage, 80 by default), for example OCAMLRUNPARAM="o=85", for optimal speed.
(Damien Doligez, review by Stephen Dolan, Jacques-Henri Jourdan, Xavier Leroy, Leo White)</p>
</blockquote>
<p>At OCamlPro, some of the tools that we develop, such as the package manager <a href="https://opam.ocaml.org/">opam</a>, the <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a> SMT solver or the Flambda optimizer, can be quite demanding in memory usage, so we were curious to better understand the properties of this new allocator.</p>
<h2>Minor heap and Major heap: the GC in a nutshell</h2>
<p>Not all values are allocated equal. Some will only be useful for the span of local calculations, some will last as long as the program lives. To handle those two kinds of values, the runtime uses a <em>Generational Garbage Collector</em> with two spaces:</p>
<ul>
<li>The minor heap uses the <a href="https://en.wikipedia.org/wiki/Tracing_garbage_collection#Copying_vs._mark-and-sweep_vs._mark-and-don.27t-sweep">Stop-and-copy</a> principle. It is fast but has to stop the computation to perform a full iteration.
</li>
<li>The major heap uses the <a href="https://en.wikipedia.org/wiki/Tracing_garbage_collection#Na%C3%AFve_mark-and-sweep">Mark-and-sweep</a> principle. It has the perk of being incremental and behaves better for long-lived data.
</li>
</ul>
<p>Allocation in the minor heap is straightforward and efficient: values are stored sequentially, and when there is no space anymore, space is emptied, surviving values get allocated in the major heap while dead values are just forgotten for free. However, the major heap is a bit more tricky, since we will have random allocations and deallocations that will eventually produce a scattered memory. This is called <a href="https://en.wikipedia.org/wiki/Fragmentation_(computing)">fragmentation</a>, and this means that you’re using more memory than necessary. Thankfully, the GC has two strategies to counter that problem:</p>
<ul>
<li>Compaction: a heavyweight reallocation of everything that will remove those holes in our heap. OCaml’s compactor is cleverly written to work in constant space, and would be worth its own specific article!
</li>
<li>Free-list Allocation: allocating the newly coming data in the holes (the free-list) in memory, de-scattering it in the process.
</li>
</ul>
<p>Of course, asking the GC to be smarter about how it allocates data makes the GC slower. Coding a good GC is a subtle art: you need to have something smart enough to avoid fragmentation but simple enough to run as fast as possible.</p>
<h2>Where and how to allocate: the 3 strategies</h2>
<p>OCaml used to propose 2 free-list allocation strategies: <em>next-fit</em>, the default, and <em>first-fit</em>. Version 4.10 of OCaml introduces the new <em>best-fit</em> strategy. Let’s compare them:</p>
<h3>Next-fit, the original and remaining champion</h3>
<p>OCaml’s original (and default) “next-fit” allocating strategy is pretty simple:</p>
<ol>
<li>Keep a (circular) list of every hole in memory ordered by increasing addresses;
</li>
<li>Have a pointer on an element of that list;
</li>
<li>When an allocation is needed, if the currently pointed-at hole is big enough, allocate in it;
</li>
<li>Otherwise, try the next hole and so-on.
</li>
</ol>
<p>This strategy is extremely efficient, but a big hole might be fragmented with very small data while small holes stay unused. In some cases, the GC would trigger costly compactions that would have been avoidable.</p>
<h3>First-fit, the unsuccessful contender</h3>
<p>To counteract that problem, the “first-fit” strategy was implemented in 2008 (OCaml 3.11.0):</p>
<ul>
<li>Same idea as next-fit, but with an extra allocation table.
</li>
<li>Put the pointer back at the beginning of the list for each allocation.
</li>
<li>Use the allocation table to skip some parts of the list.
</li>
</ul>
<p>Unfortunately, that strategy is slower than the previous one. This is an example of how making the GC smarter can end up making it slower. It does, however, reduce fragmentation. It was still useful to have this strategy at hand for cases where compaction would be too costly (on a 100Gb heap, for instance). An application that requires low latency might want to disable compaction and use that strategy.</p>
<h3>Best-fit: a new challenger enters!</h3>
<p>This leads us to the brand new “best-fit” strategy. This strategy is actually composite and will have different behaviors depending on the size of the data you’re trying to allocate.</p>
<ul>
<li>On small data (up to 32 words), <a href="https://github.com/ocaml/ocaml/blob/trunk/runtime/freelist.c#L868">segregated free lists</a> will allow allocation in (mostly) constant time.
</li>
<li>On big data, a general best-fit allocator based on <a href="https://en.wikipedia.org/wiki/Splay_tree">splay trees</a>.
</li>
</ul>
<p>This allows for the best of the two worlds, as you can easily allocate your numerous small blocks in the small holes in your memory while you take a bit more time to select a good place for your big arrays.</p>
<p>How will best-fit fare? Let’s find out!</p>
<h2>Try it!</h2>
<p>First, let us remind you that this is still an experimental feature, which from the OCaml development team means “We’ve tested it thoroughly on different systems, but only for months and not on a scale as large as the whole OCaml ecosystem”.</p>
<p>That being said, we’d advise you don’t use it in production code yet.</p>
<h3>Why you should try it</h3>
<p>Making benchmarks of this new strategy could be beneficial for you and the language at large: the dev team is hoping for feedback, the more quality feedback <strong>you</strong> give means the more the future GC will be tuned for your needs.</p>
<p>In 2008, the first-fit strategy was released with the hope of improving memory usage by reducing fragmentation. However, the lack of feedback meant that the developers were not aware that it didn’t meet the users’ needs. If more feedback had been given, it’s possible that work on improving the strategy or on better strategies would have happened sooner.</p>
<h3>Choosing the allocator strategy</h3>
<p>Now, there are two ways to control the GC behavior: through the code or through environment variables.</p>
<h4>First method: Adding instructions in your code</h4>
<p>This method should be used by those of us who have code that already does some GC fine-tuning. As early as possible in your program, you want to execute the following lines:</p>
<pre><code class="language-Ocaml">let () =
Gc.(set
{ (get()) with
allocation_policy = 2; (* Use the best-fit strategy *)
space_overhead = 100; (* Let the major GC work a bit less since it's more efficient *)
})
</code></pre>
<p>You might also want to add <code>verbose = 0x400;</code> or <code>verbose = 0x404;</code> in order to get some GC debug information. See <a href="https://caml.inria.fr/pub/docs/manual-ocaml/libref/Gc.html">here</a> for more details on how to use the <code>GC</code> module.</p>
<p>Of course, you’ll need to recompile your code, and this will apply only after the runtime has initialized itself, triggering a compaction in the process. Also, since you might want to easily switch between different allocation policies and overhead specifications, we suggest you use the second method.</p>
<h4>Second method: setting <code>$OCAMLRUNPARAM</code></h4>
<p>At OCamlPro, we develop and maintain a program that any OCaml developer should want to run smoothly. It’s called <a href="https://opam.ocaml.org/">Opam</a>, maybe you’ve heard of it? Though most commands take a few seconds, some <a href="https://opam.ocaml.org/doc/man/opam-admin-check.html">administrative-heavy</a> commands can be a strain on our computer. In other words: those are perfect for a benchmark.</p>
<p>Here’s what we did to benchmark Opam:</p>
<pre><code class="language-shell-session">$ opam update
$ opam switch create 4.10.0
$ opam install opam-devel # or build your own code
$ export OCAMLRUNPARAM='b=1,a=2,o=100,v=0x404'
$ cd my/local/opam-repository
$ perf stat ~/.opam/4.10.0/lib/opam-devel/opam admin check --installability # requires right to execute perf, time can do the trick
</code></pre>
<p>If you want to compile and run your own benchmarks, here are a few details on <code>OCAMLRUNPARAM</code>:</p>
<ul>
<li><code>b=1</code> means “print the backtrace in case of uncaught exception”
</li>
<li><code>a=2</code> means “use best-fit” (default is <code>0</code>, first-fit is <code>1</code>)
</li>
<li><code>o=100</code> means “do less work” (default is <code>80</code>, lower means more work)
</li>
<li><code>v=0x404</code> means “have the GC be verbose” (<code>0x400</code> is “print statistics at exit”, <code>0x4</code> is “print when changing heap size”)
</li>
</ul>
<p>See the <a href="https://caml.inria.fr/pub/docs/manual-ocaml/runtime.html#s%3Aocamlrun-options">manual</a> for more details on <code>OCAMLRUNPARAM</code>.</p>
<p>You might want to compare how your code fares on all three different GC strategies (and fiddle a bit with the overhead to find your best configuration).</p>
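<p>If you prefer to collect those numbers from within the program itself rather than from <code>perf</code>, the standard <code>Gc</code> module can report them. Below is a minimal sketch of our own (the fields printed are an arbitrary choice) that logs the policy in use and a few statistics when the program exits:</p>
<pre><code class="language-Ocaml">(* Minimal sketch: report the allocation policy in use and a few GC
   statistics at exit, so runs under different OCAMLRUNPARAM settings
   are easy to compare *)
let () =
  at_exit (fun () ->
    let c = Gc.get () in
    let s = Gc.stat () in
    Printf.eprintf "policy=%d overhead=%d top_heap_words=%d compactions=%d\n"
      c.Gc.allocation_policy c.Gc.space_overhead
      s.Gc.top_heap_words s.Gc.compactions)
</code></pre>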
<h2>Our results on opam</h2>
<p>Our contribution in this article is to benchmark <code>opam</code> with the different allocation strategies:</p>
<figure><table><thead><tr><th>Strategy:</th><th>Next-fit</th><th>First-fit</th><th colspan="3" scope="colgroup">Best-fit</th></tr><tr><th>Overhead:</th><th>80</th><th>80</th><th>80</th><th>100</th><th>120</th></tr></thead><tbody><tr><td>Cycles used (Gcycles)</td><td>2,040</td><td>3,808</td><td>3,372</td><td>2,851</td><td>2,428</td></tr><tr><td>Maximum heap size (kB)</td><td>793,148</td><td>793,148</td><td>689,692</td><td>689,692</td><td>793,148</td></tr><tr><td>User time (s)</td><td>674</td><td>1,350</td><td>1,217</td><td>1,016</td><td>791</td></tr></tbody></table></figure>
<p>A quick word on these results. Most of <code>opam</code>’s calculations are done by <a href="http://www.mancoosi.org/software/">dose</a> and rely heavily on small interconnected blocks. We don’t really have big chunks of data to allocate, so Best-fit won’t give us the bonus you might see elsewhere: this workload falls squarely into the best-case scenario of the next-fit strategy. As a matter of fact, not a single GC compaction happened with any of the strategies. However, Best-fit still allows for a lower memory footprint!</p>
<h2>Conclusions</h2>
<p>If your software’s memory footprint matters, you should definitely try the new Best-fit strategy and stay tuned for its future development. If your software requires good performance, knowing whether it performs better with Best-fit (and giving feedback on those results) might help you in the long run.</p>
<p>The different strategies are:</p>
<ul>
<li>Next-fit: generally good and fast, but has very bad worst cases with big heaps.
</li>
<li>First-fit: mainly useful for very big heaps that must avoid compaction as much as possible.
</li>
<li>Best-fit: almost the best of both worlds, with a small performance hit for programs that fit well with next-fit.
</li>
</ul>
<p>Remember that whatever works best for you, it’s still better than having to <code>malloc</code> and <code>free</code> by hand. Happy allocating!</p>
<h1>Comments</h1>
<p>gasche (23 March 2020 at 17 h 50 min):</p>
<blockquote>
<p>What about higher overhead values than 120, like 140, 160, 180 and 200?</p>
</blockquote>
<p>Thomas Blanc (23 March 2020 at 18 h 17 min):</p>
<blockquote>
<p>Because 100 was the overhead value Leo advised in the PR discussion I decided to put it in the results. As 120 got the same maximum heap size as next-fit I found it worth putting it in. Higher overhead values lead to faster execution time but a bigger heap.</p>
<p>I don’t have my numbers at hand right now. You’re probably right that they are relevant (to you and Damien at least) but I didn’t want to have a huge table at the end of the post.</p>
</blockquote>
<p>nbbb (24 March 2020 at 11 h 18 min):</p>
<blockquote>
<p>Higher values would allow us to see if best-fit can reproduce the performance characteristics of next-fit, for some value of the overhead.</p>
</blockquote>
<p>nbbb (24 March 2020 at 16 h 51 min):</p>
<blockquote>
<p>I just realized that 120 already has a heap as big as next-fit — so best-fit can’t get as good as next-fit in this example, and higher values of the overhead are not quite as informative. Should have read more closely the first time.</p>
</blockquote>
<p>Thomas Blanc (24 March 2020 at 16 h 55 min):</p>
<blockquote>
<p>Sorry that it wasn’t as clear as it could be.</p>
<p>Note that opam and dose are in the best-case scenario of best-fit. Your own code would probably produce a different result and I encourage you to test it and communicate about it.</p>
</blockquote>
New version of TryOCaml in beta!https://ocamlpro.com/blog/2020_03_16_new_version_of_try_ocaml_in_beta2020-03-16T08:12:13Z2020-03-16T08:12:13Z
Louis Gesbert
We are happy to announce that our venerable "TryOCaml" service is being retired and replaced by a new, modern version based upon our work on Learn-OCaml. → Try it here ← The new interface provides an editor panel besides the familiar top-level, error and warning positions highlighting, the lates...<p><img src="/blog/assets/img/picture_new_tryocaml.jpeg" alt="" /></p>
<p>We are happy to announce that our venerable "TryOCaml" service is being retired and replaced by a new, modern version based upon our work on <a href="https://github.com/ocaml-sf/learn-ocaml">Learn-OCaml</a>.</p>
<p>→ <a href="https://try.ocamlpro.com">Try it here</a> ←</p>
<p>The new interface provides an editor panel besides the familiar top-level, error and warning positions highlighting, the latest OCaml release (4.10.0), local storage of your session, and more.</p>
<blockquote>
<p>The service is still in beta, so it would be helpful if you could tell us about any hiccups you may encounter <a href="https://discuss.ocaml.org/t/ann-try-ocaml-2-0-beta">on the Discuss thread</a>.</p>
</blockquote>
<p>Let's read the testimony of Sylvain Conchon about our new version of TryOCaml:</p>
<blockquote>
<p>“TryOCaml saved our lives in Paris Saclay in these times of social distancing. I teach functional programming with OCaml to my Y2 Bachelor’s Degree students. With the quarantine in place, we weren’t able to host the practical assignment in the machine room as usual, so we decided the students would do the exam at home. However, many of our students use Windows on which setting up OCaml is a hassle, or otherwise encountered problems while setting up the OCaml environment. We invited our students to use try-ocaml instead! Many have and the exam went really smoothly.”</p>
</blockquote>
Annual Meeting of the Alt-Ergo Users' Clubhttps://ocamlpro.com/blog/2020_03_03_reunion_annuelle_du_club_des_utilisateurs_dalt_ergo2020-03-03T08:12:13Z2020-03-03T08:12:13Z
Aurore Dromby
Alt-Ergo meeting Logo Alt-Ergo The second annual meeting of the Alt-Ergo Users' Club took place in mid-February! Our annual meeting is the ideal place to review each partner's needs regarding Alt-Ergo. This year, we had the pleasure of welcomin...<p><img src="/blog/assets/img/altergo-meeting.jpeg" alt="Alt-Ergo meeting" />
<img src="/assets/img/logo_altergo.png" alt="Logo Alt-Ergo" /></p>
<p>The second annual meeting of the Alt-Ergo Users' Club took place in mid-February! Our annual meeting is the ideal place to review each partner's needs regarding Alt-Ergo. This year, we had the pleasure of welcoming our partners to discuss the roadmap for Alt-Ergo's future developments and improvements.</p>
<blockquote>
<p>Alt-Ergo is an automated prover for mathematical formulas, created at the <a href="https://www.lri.fr/">LRI</a> and developed by OCamlPro since 2013. To learn more or join the Club, visit <a href="https://alt-ergo.ocamlpro.com">https://alt-ergo.ocamlpro.com/</a>.</p>
</blockquote>
<p>Our Club has several goals, the first being to ensure the long-term sustainability of Alt-Ergo by fostering collaboration between Club members and strengthening ties with formal methods communities such as Why3. One of our priorities is to increase the number of users of our tool by extending it to new domains such as model checking; taking part in international competitions is also a way to gain visibility. Finally, the Club's last goal is to find new projects or contracts for the development of long-term features.</p>
<p>We thank all our members for their support and welcome Mitsubishi Electric R&D Centre Europe, which joins AdaCore and CEA List as a Club member this year. We would also like to highlight the <a href="http://why3.lri.fr/">Why3</a> development team, with whom we are working to improve our tools.</p>
<p>Our members are particularly interested in the following topics:</p>
<p>– Better generation of models and counterexamples</p>
<p>– The addition of a theory of sequences</p>
<p>– Improved support for non-linear arithmetic in Alt-Ergo</p>
<p>These features are now our main priorities. To follow our progress and news, feel free to read our <a href="category/formal_methods">articles</a> on this blog.</p>
2019 at OCamlPro https://ocamlpro.com/blog/2020_02_05_fr_2019_chez_ocamlpro2020-02-05T08:12:13Z2020-02-05T08:12:13Z
OCamlPro
2019 at OCamlPro OCamlPro's ambition is to help industrial users adopt the OCaml language and formal methods. The company has grown from 1 to 21 people and has stayed true to this goal. 2019 was a very lively year at OCamlPro, and the number of achievem...<p><img src="/blog/assets/img/logo_ocp_2019.png" alt="2019 at OCamlPro" /></p>
<p>OCamlPro's ambition is to help industrial users adopt the OCaml language and formal methods. The company has grown from 1 to 21 people and has stayed true to this goal. 2019 was a very lively year at OCamlPro, with an impressive number of achievements,
first in the OCaml world (flambda2 & compiler optimizations, opam 2, our Rust interface for memprof, tools such as
tryOCaml and ocp-indent, and support for the OCaml Software Foundation), and in the formal methods world (new versions of our
SMT solver Alt-Ergo, launch of the Alt-Ergo Users' Club, launch of the Love language, etc.)</p>
<p><a href="/2020/02/04/2019-at-ocamlpro/">Read more (in English)</a></p>
2019 at OCamlProhttps://ocamlpro.com/blog/2020_02_04_2019_at_ocamlpro2020-02-04T08:12:13Z2020-02-04T08:12:13Z
Muriel
OCamlPro
2019 at OCamlPro OCamlPro was created to help OCaml and formal methods spread into the industry. We grew from 1 to 21 engineers, still strongly sharing this ambitious goal! The year 2019 at OCamlPro was very lively, with fantastic accomplishments all along! Let's quickly review the past years' works...<p><img src="/blog/assets/img/logo_ocp_2019.png" alt="2019 at OCamlPro" /></p>
<p>OCamlPro was created to help OCaml and formal methods spread into the industry. We grew from 1 to 21 engineers, still strongly sharing this ambitious goal! The year 2019 at OCamlPro was very lively, with fantastic accomplishments all along!</p>
<p>Let's quickly review the past years' works, first in the world of <a href="#ocaml">OCaml</a> (<a href="#compilation">flambda2</a> & compiler optimisations, <a href="#opam">opam</a> 2, our <a href="#rust">Rust-based</a> UI for <a href="#memthol">memprof</a>, tools like tryOCaml, ocp-indent, and supporting the <a href="#ocsf">OCaml Software Foundation</a>), then in the world of <a href="#formalmethods">formal methods</a> (new versions of our SMT Solver <a href="#altergo">Alt-Ergo</a>, launch of the <a href="#altergoclub">Alt-Ergo Users' Club</a>, the <a href="#love">Love language</a>, etc.).</p>
<h2>In the World of OCaml</h2>
<p><img src="/blog/assets/img/logo_ocaml.png" alt="ocaml" /></p>
<h3>Flambda/Compilation Team</h3>
<p><em>Work by Pierre Chambart, Vincent Laviron, Guillaume Bury and Pierrick Couderc</em></p>
<p>Pierre and Vincent's considerable work on Flambda 2 (the optimizing intermediate representation of the OCaml compiler – on which inlining occurs), in close cooperation with Jane Street (Mark Shinwell, Leo White and their team) aims at overcoming some of flambda's limitations. We have continued our work on making OCaml programs always faster: internal types are clearer, more concise, and possible control flow transformations are more flexible. Overall a precious outcome for industrial users. In 2019, the major breakthrough was to go from the initial prototype to a complete compiler, which allowed us to compile simple examples first and then to bootstrap it.</p>
<p>On the OCaml compiler side, we also worked with Leo on two new features: functorized compilation units and functorized packs, and recursive packs. The former will allow any developer to implement <code>.ml</code> files as if they were functors and not simply modules, and, more importantly, to generate packs that are real functors. This makes it possible to split big functors into several files or to parameterize libraries over other modules. The latter allows two distinct usages: recursive interfaces, to declare recursive types across distinct <code>.mli</code> files as long as they do not need any implementation; and recursive packs, whose components are typed and compiled as recursive modules.</p>
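<p>To give an idea of what this enables, here is a plain functor as you would write it today; functorized compilation units aim at letting a whole <code>.ml</code> file play the role of such a functor, parameterized by other modules. The module names below are purely illustrative.</p>
<pre><code class="language-Ocaml">(* An ordinary functor: functorized units would let an entire file be
   compiled as one of these, without wrapping it by hand *)
module MakeInterval (Ord : sig
  type t
  val compare : t -> t -> int
end) = struct
  type t = { lo : Ord.t; hi : Ord.t }
  let make lo hi = if Ord.compare lo hi <= 0 then Some { lo; hi } else None
end

module IntInterval = MakeInterval (struct
  type t = int
  let compare = compare
end)
</code></pre>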
<ul>
<li>These new features are described on the new <a href="https://github.com/ocaml/RFCs/pull/11">RFC repository</a> for OCaml (a <a href="https://github.com/ocaml/ocaml/issues/5283">similar idea</a> was suggested and implemented in 2011 by Fabrice Le Fessant).
</li>
<li>The implementation is available on GitHub for both <a href="https://github.com/OCamlPro-Couderc/ocaml/tree/functorized-packs">functorized packs</a> and <a href="https://github.com/OCamlPro-Couderc/ocaml/tree/recursive-units+pack-cleanup">recursive packs</a>. Be aware that both are based on an old version of OCaml for now, but should be in sync with the current trunk in the near future.
</li>
<li>See also Vincent's <a href="/blog/2019_08_30_ocamlpros_compiler_team_work_update">OCamlPro’s compiler team work update</a> of August 2019.
</li>
</ul>
<p><em>This work is allowed thanks to Jane Street's funding.</em></p>
<h3>Work on a formalized type system for OCaml</h3>
<p><em>Work of Pierrick Couderc</em></p>
<p>At the end of 2018, Pierrick defended his PhD on "<a href="https://pastel.archives-ouvertes.fr/tel-02100717/">Checking type inference results of the OCaml language</a>", leading to a formalized type system and semantics for a large subset of OCaml, or at least of its unique typed intermediate language: the Typedtree. This work led us to work on new aspects of the OCaml compiler, such as the recursive and functorized packs described earlier, and we hope it will prove useful in the future for the evolution of the language.</p>
<h3>The OPAM package manager</h3>
<p><em>Work of Raja Boujbel and Louis Gesbert</em></p>
<p><img src="/blog/assets/img/logo_opam_300_261.png" alt="opam" /></p>
<p><a href="https://opam.ocaml.org">OPAM</a> is maintained and developed at OCamlPro by Louis and Raja. Thanks to their thorough efforts the opam 2.1 first release candidate is soon to be published!</p>
<p>Back in 2018, the long-awaited opam 2.0 version was finally released. It embedded many changes, in opam itself as well as for the community. The opam file format was redefined to simplify it and add new features. With the close collaboration of OCamlLabs and the opam repository maintainers, we were able to manage a smooth transition of the repository and the whole ecosystem from the opam 1.2 format to the new – and richer – opam 2.0 format. Other emblematic features include the integrated mccs solver, sandboxed builds for better security (we care about your filesystem!), a reworked <code>pin</code> command for better usability, etc.</p>
<p>While the 2.1.0 version is in preparation, the 2.0.0 version is still updated with minor releases to fix issues. The latest 2.0.6 release is fresh from January.</p>
<p>In the meantime, we continued to improve opam by integrating some opam plugins (opam lock, opam depext), recursively discovering opam files in the file tree when pinning, redefining the switch compiler, adding the possibility to use a Z3 backend instead of mccs, etc.</p>
<p>All these new features – among others – will be integrated in the 2.1.0 release, with a beta planned for February. The best is yet to come!</p>
<ul>
<li>More details: on <a href="https://opam.ocaml.org">https://opam.ocaml.org</a>
</li>
<li>Releases on <a href="https://github.com/ocaml/opam/releases">https://github.com/ocaml/opam/releases</a> & <a href="https://opam.ocaml.org/blog/opam-2-0-6/">our blog</a>
</li>
</ul>
<p><em>This work is allowed thanks to Jane Street's funding.</em></p>
<h3>Encouraging OCaml adoption</h3>
<h4>OCaml Expert trainings for professional programmers</h4>
<p>In 2019, we offered <a href="/course_ocaml_expert">OCaml expert training</a> sessions specially designed for developers who want to use advanced features and master all the open-source tools and libraries of OCaml.</p>
<blockquote>
<p>The "Expert" OCaml course is for already experienced OCaml programmers to better understand advanced type system possibilities (objects, GADTs), discover GC internals, write "compiler-optimizable" code. These sessions are also an opportunity to come discuss with our OPAM & Flambda lead developers and core contributors in Paris.</p>
</blockquote>
<p>Next session: 3-4 March 2020, Paris <a href="https://www.ocamlpro.com/pre-inscription-a-une-session-de-formation-inter-entreprises/">(registration)</a></p>
<h4>Our cheat-sheets on OCaml, the stdlib and opam</h4>
<p><em>Work of Thomas Blanc, Raja Boujbel and Louis Gesbert</em></p>
<p>Thomas announced the release of our up-to-date cheat-sheets for the <a href="/blog/2019_09_13_updated_cheat_sheets_language_stdlib_2">OCaml language, standard library</a> and <a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-opam.pdf">opam</a>. Our original cheat-sheets dated back to 2011. This was an opportunity to update them after the <a href="/blog/2019_09_13_updated_cheat_sheets_language_stdlib_2">many changes</a> in the language, library and ecosystem overall.</p>
<blockquote>
<p>Cheat-sheets are helpful to refer to, as an overview of the documentation when you are programming, especially when you’re starting in a new language. They are meant to be printed and pinned on your wall, or to be kept in handy on a spare screen. <em>They come in handy when your <a href="https://rubberduckdebugging.com/">rubber duck</a> is rubbish at debugging your code!</em></p>
</blockquote>
<p>More details on <a href="/blog/2019_09_13_updated_cheat_sheets_language_stdlib_2">Thomas' blog post</a></p>
<h4>Open Source Tooling and Web IDEs</h4>
<p>And let's not forget the other tools we develop and maintain! We have tools for education such as our interactive editor OCaml-top and <a href="https://try.ocamlpro.com/new.html">Try-OCaml</a> (from the previous work on the learn-OCaml platform for the OCaml Fun MOOC) which you can use to code in your browser. Developers will appreciate tools like our indentation tool ocp-indent, and ocp-index which gives you easy access to the interface information of installed OCaml libraries for editors like Emacs and Vim.</p>
<h3>Supporting the OCaml Software Foundation</h3>
<p>OCamlPro was proud to be one of the first supporters of Inria's new <a href="https://ocaml-sf.org/">OCaml Software Foundation</a>. We remain committed to the adoption of OCaml as an industrial language:</p>
<blockquote>
<p>"[…] As a long-standing supporter of the OCaml language, we have always been committed to helping spread OCaml's adoption and increase the accessibility of OCaml to beginners and students. […] We value close and friendly collaboration with the main actors of the OCaml community, and are proud to be contributing to the OCaml language and tooling." (August 2019, Advisory Board of the OCSF, ICFP Berlin)</p>
</blockquote>
<p>More information on the <a href="https://ocaml-sf.org/">OCaml Software Foundation</a></p>
<h2>In the World of Formal Methods</h2>
<p><em>By Mohamed Iguernlala, Albin Coquereau, Guillaume Bury</em></p>
<p>In 2018, we welcomed five new engineers with a background in formal methods. They have consolidated the formal methods department at OCamlPro, and in particular help develop and maintain our SMT solver Alt-Ergo.</p>
<h3>Release of Alt-Ergo 2.3.0, and version 2.0.0 (free)</h3>
<p>After the release of <a href="/blog/2018_04_23_release_of_alt_ergo_2_2_0">Alt-Ergo 2.2.0</a> (with a new front-end that supports the SMT-LIB 2 language, extended prenex polymorphism, implemented as a standalone library) came version 2.3.0 in 2019 with new features: dune support, ADTs / algebraic data types, improved if-then-else and let-in support, and improvements to the handling of data types.</p>
<ul>
<li>More information on the <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo SMT Solver</a>
</li>
<li>Albin Coquereau defended his PhD thesis in December 2019, "Improving performance of the SMT solver Alt-Ergo with a better integration of efficient SAT solver"
</li>
<li>We participated in the SMT-COMP 2019 during the 22nd SAT conference. The results of the competition are detailed <a href="/blog/2019_07_09_alt_ergo_participation_to_the_smt_comp_2019">here.</a>
</li>
</ul>
<h3>The launch of the Alt-Ergo Users' Club</h3>
<p>Getting closer to our users, gathering industrial and academic supporters, collecting their needs into the Alt-Ergo roadmap is key to Alt-Ergo's development and sustainability.</p>
<p>The <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users' Club</a> was officially launched beginning of 2019. The first yearly meeting took place in February 2019. We were happy to welcome our first members <a href="https://www.adacore.com">Adacore</a>, <a href="https://www-list.cea.fr/en/">CEA List</a>, <a href="https://trust-in-soft.com">Trust-In-Soft</a>, and now Mitsubishi MERCE.</p>
<p>More information on the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo Users' Club</a></p>
<p><img src="/blog/assets/img/logo_love_couleur.png" alt="Love-language" /></p>
<h2>Harnessing our language-design expertise: Love</h2>
<p><em>Work by David Declerck & Steven de Oliveira</em></p>
<p>Following the launch of the Dune network, the Love language for smart contracts was born from the collaboration of OCamlPro and Origin Labs. This new language, whose syntax is inspired by OCaml and Liquidity, is an alternative to Michelson, the Dune network's native smart contract language. Love is based on System F, a type system requiring no type inference and allowing polymorphism. The language has successfully been integrated on the network and the first smart contracts are being written.</p>
<p><a href="https://medium.com/dune-network/love-a-new-smart-contract-language-for-the-dune-network-a217ab2255be">LOVE: a New Smart Contract Language for the Dune Network</a>
<a href="https://medium.com/dune-network/the-love-smart-contract-language-introduction-key-features-part-i-949d8a4e73c3">The Love Smart Contract Language: Introduction & Key Features — Part I</a></p>
<h2>Rust-related activities</h2>
<p>The OCaml & Rust combo <em>should</em> be a candidate for any ambitious software project!</p>
<ul>
<li>A Rust-based UI for memprof: in 2019, we started working in collaboration with the memprof developer team on a Rust-based UI for memprof. See Pierre and Albin's talk "Gardez votre mémoire fraiche avec Memthol" ("Keep your memory fresh with Memthol") at <a href="https://jfla.inria.fr/jfla2020.html">JFLA2020</a> (Pierre Chambart, Albin Coquereau and Jacques-Henri Jourdan)
</li>
<li><a href="/course_rust_vocational_training">Rust training</a> : <em>Rust borrows heavily from functional programming languages to provide very expressive abstraction mechanisms. Because it is a systems language, these mechanisms are almost always zero-cost. For instance, polymorphic code has no runtime cost compared to a monomorphic version.This concern for efficiency also means that Rust lets developers keep a very high level of control and freedom for optimizations. Rust has no Garbage Collection or any form of runtime memory inspection to decide when to free, allocate or re-use memory. But because manual memory management is almost universally regarded as dangerous, or at least very difficult to maintain, the Rust compiler has a borrow-checker which is responsible for i) proving that the input program is memory-safe (and thread-safe), and ii) generating a safe and “optimal” allocation/deallocation strategy. All of this is done at compile-time.</em>
</li>
<li>Next sessions: April 20-24th 2020 <a href="https://www.ocamlpro.com/pre-inscription-a-une-session-de-formation-inter-entreprises/">(registration)</a>
</li>
</ul>
<h2>OCamlPro around the world</h2>
<p>OCamlPro's team members attended many events throughout the world:</p>
<ul>
<li><a href="https://icfp19.sigplan.org/">ICFP 2019</a> (Berlin)
</li>
<li>The <a href="https://dpt-info.u-strasbg.fr/~magaud/JFLA2019/lieu.html">JFLA’2019</a> (Les Rousses, Haut-Jura)
</li>
<li>The <a href="https://www.opensourcesummit.paris/">POSS'2019</a> (Paris)
</li>
<li><a href="https://retreat.mirage.io/">MirageOS Retreat</a> (Marrakech)
</li>
</ul>
<p>Committed to keeping the OCaml ecosystem lively, we've organized OCaml meetups too (see the famous <a href="https://www.meetup.com/fr-FR/ocaml-paris/">OUPS</a> meetups in Paris!).</p>
<p>Now let's jump into the new year 2020, with a team that keeps expanding, and new projects ahead: keep posted!</p>
<h3>Past projects: blockchain-related achievements (2018-beginning of 2019)</h3>
<p>Many people ask us about what happened in 2018! That was an incredibly active year on blockchain-related achievements, and at that time we were hoping to attract clients that would be interested in our blockchain expertise.</p>
<p>But that is <a href="https://files.ocamlpro.com/Flyer_Blockchains_OSIS2017ok.pdf">history</a> now! Still interested? Check the <a href="https://www.origin-labs.com/">Origin Labs</a> team and their partner <a href="https://www.thegara.ge/">The Garage</a> on <a href="https://dune.network">Dune Network</a>!</p>
<p>For the <a href="/blog/2019_04_29_blockchains_at_ocamlpro_an_overview">record</a>:</p>
<ul>
<li>(April 2019) We had started Techelson: a testing framework for Michelson and Liquidity
</li>
<li>(Nov 2018) <a href="/blog/2018_11_21_an_introduction_to_tezos_rpcs_signing_operations">An Introduction to Tezos RPCs: Signing Operations</a> / <a href="/blog/2018_11_15_an-introduction_to_tezos_rpcs_a_basic_wallet">An Introduction to Tezos RPCs: a Basic Wallet</a> / <a href="/blog/2018_11_06_liquidity_tutorial_a_game_with_an_oracle_for_random_numbers">Liquidity Tutorial: A Game with an Oracle for Random Numbers</a> / <a href="/blog/2018_11_08_first_open_source_release_of_tzscan">First Open-Source Release of TzScan</a>
</li>
<li>(Oct 2018) <a href="/blog/2018_10_17_ocamlpros_tzscan_grant_proposal_accepted_by_the_tezos_foundation_joint_press_release">OCamlPro’s TZScan grant proposal accepted by the Tezos Foundation – joint press release</a>
</li>
<li>(Jul 2018) <a href="/blog/2018_07_20_new_updates_on_tzscan_2">OCamlPro’s Tezos block explorer TzScan’s last updates</a>
</li>
<li>(Feb 2018) <a href="/blog/2018_02_14_release_of_a_first_version_of_tzscan_io_a_tezos_block_explorer">Release of a first version of TzScan.io, a Tezos block explorer</a> / <a href="/blog/2018_11_06_liquidity_tutorial_a_game_with_an_oracle_for_random_numbers">OCamlPro’s Liquidity-lang demo at JFLA2018 – a smart-contract design language</a>. We were developing <a href="https://www.liquidity-lang.org/">Liquidity</a>, a high-level smart contract language, human-readable, purely functional, statically-typed, whose syntax was very close to the OCaml syntax.
</li>
<li>To garner interest and adoption, we also developed the online editor <a href="https://www.liquidity-lang.org/edit">Try Liquidity</a>. Smart-contract developers could design contracts interactively, directly in the browser, compile them to Michelson, run them and deploy them on the alphanet network of Tezos. Future plans included a full-fledged web-based IDE for Liquidity. Worth mentioning was a neat feature: decompiling a Michelson program back to its Liquidity version, whether it was generated from Liquidity code or not.
</li>
</ul>
opam 2.0.6 releasehttps://ocamlpro.com/blog/2020_01_16_opam_2.0.6_release2020-01-16T08:12:13Z2020-01-16T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the minor release of opam 2.0.6. This new version contains some small backported fixes and build update: Don't remove git cache objects that may be used [#3831 @AltGr]
Don't include .gitattributes in index.tar.gz [#3873 @dra27]
Update FAQ uri [#3941 @dra27]
Lock: add warni...<p>We are pleased to announce the minor release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.6">opam 2.0.6</a>.</p>
<p>This new version contains some small <a href="https://github.com/ocaml/opam/pull/3973">backported</a> fixes and build updates:</p>
<ul>
<li>Don't remove git cache objects that may be used [<a href="https://github.com/ocaml/opam/pull/3831">#3831</a> <a href="https://github.com/AltGr">@AltGr</a>]
</li>
<li>Don't include .gitattributes in index.tar.gz [<a href="https://github.com/ocaml/opam/pull/3873">#3873</a> <a href="https://github.com/dra27">@dra27</a>]
</li>
<li>Update FAQ uri [<a href="https://github.com/ocaml/opam/pull/3941">#3941</a> <a href="https://github.com/dra27">@dra27</a>]
</li>
<li>Lock: add warning in case of missing locked file [<a href="https://github.com/ocaml/opam/pull/3939">#3939</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>Directory tracking: fix cached entries retrieving with precise
tracking [<a href="https://github.com/ocaml/opam/pull/4038">#4038</a> <a href="https://github.com/hannesm">@hannesm</a>]
</li>
<li>Build:
<ul>
<li>Add sanity checks [<a href="https://github.com/ocaml/opam/pull/3934">#3934</a> <a href="https://github.com/dra27">@dra27</a>]
</li>
<li>Build man pages using dune [<a href="https://github.com/ocaml/opam/issues/3902">#3902</a>]
</li>
<li>Add patch and bunzip check for make cold [<a href="https://github.com/ocaml/opam/pull/4006">#4006</a> <a href="https://github.com/rjbou">@rjbou</a> - fix <a href="https://github.com/ocaml/opam/issues/3842">#3842</a>]
</li>
</ul>
</li>
<li>Shell:
<ul>
<li>fish: add colon for fish manpath [<a href="https://github.com/ocaml/opam/pull/3886">#3886</a> <a href="https://github.com/rjbou">@rjbou</a> - fix <a href="https://github.com/ocaml/opam/issues/3878">#3878</a>]
</li>
</ul>
</li>
<li>Sandbox:
<ul>
<li>Add dune cache as rw [<a href="https://github.com/ocaml/opam/pull/4019">#4019</a> <a href="https://github.com/rjbou">@rjbou</a> - fix <a href="https://github.com/ocaml/opam/issues/4012">#4012</a>]
</li>
<li>Do not fail if $HOME/.ccache is missing [<a href="https://github.com/ocaml/opam/pull/3957">#3957</a> <a href="https://github.com/mseri">@mseri</a>]
</li>
</ul>
</li>
<li>opam-devel file: avoid copying extraneous files in opam-devel example [<a href="https://github.com/ocaml/opam/pull/3999">#3999</a> <a href="https://github.com/maroneze">@maroneze</a>]
</li>
</ul>
<p>As <strong>sandbox scripts</strong> have been updated, don't forget to run <code>opam init --reinit -ni</code> to update yours.</p>
<blockquote>
<p>Note: To homogenise macOS name on system detection, we decided to keep <code>macos</code>, and convert <code>darwin</code> to <code>macos</code> in opam. For the moment, to not break jobs & CIs, we keep uploading <code>darwin</code> & <code>macos</code> binaries, but from the 2.1.0 release, only <code>macos</code> ones will be kept.</p>
</blockquote>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-sheel-session">bash -c "sh <(curl -fsSL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh) --version 2.0.6"
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.6">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed, or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.6#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
The Opam 2.0 cheatsheet, with a new theme!https://ocamlpro.com/blog/2020_01_10_opam_2.0_cheat_sheet2020-01-10T08:12:13Z2020-01-10T08:12:13Z
Thomas Blanc
The Opam 2.0 cheatsheet, with a new theme! Earlier, we dusted-off our Language and Stdlib cheatsheets, for teachers and students. With more time, we managed to design an Opam 2.0 cheat-sheet we are proud of. It is organized into two pages: The everyday average Opam use:
Installation, Configuration, ...<p><a href="/blog/2020_01_10_opam_2.0_cheat_sheet"><img src="/blog/assets/img/logo_opam_blue.png" alt="The Opam 2.0 cheatsheet, with a new theme!" /></a></p>
<p><a href="/blog/2019_09_13_updated_cheat_sheets_language_stdlib_2">Earlier</a>, we dusted-off our Language and Stdlib cheatsheets, for teachers and students. With more time, we managed to design an Opam 2.0 cheat-sheet we are proud of. It is organized into two pages:</p>
<ul>
<li>The everyday average Opam use:
<ul>
<li>Installation, Configuration, Switches, Allowed URL formats, Packages, Exploring, Package pinning, Working with local pins, Sharing a dev setup, Configuring remotes.
</li>
</ul>
</li>
<li>Peculiar advanced use cases (opam-managed project, publishing, repository maintenance, etc.):
<ul>
<li>Package definition files, Some optional fields, Expressions, External dependencies, Publishing, Repository administration.
</li>
</ul>
</li>
</ul>
<p>Moreover, with the help of listings, we tried the use of colors for better readability. And we left some blank space for your own peculiar commands. Two versions are available (PDF):</p>
<ul>
<li>The Opam cheatsheet in <a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-opam-bw.pdf">black & white</a>
</li>
<li>The Opam cheatsheet in <a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-opam.pdf">colour</a>.
</li>
</ul>
<p>In any case do not hesitate to send us your suggestions on <a href="https://github.com/OCamlPro/ocaml-cheat-sheets">github</a>:</p>
<ul>
<li>Louis and Raja, the lead Opam developers, designed this cheatsheet so as to shed light on some important features (some of which I discovered even though I speak with them daily!). If a command <em>you</em> find useful is not mentioned, let us know and we’ll add it. Feel free to ask for clarification and/or expansion of the manual!
</li>
</ul>
<p>Happy hacking!</p>
<blockquote>
<p>Note: If you come to one of our <a href="https://training.ocamlpro.com/">training sessions</a>, you’ll get a free cheatsheet! Isn’t that a bargain?</p>
</blockquote>
News from OCamlPro's compiler teamhttps://ocamlpro.com/blog/2019_09_30_fr_travaux_sur_le_compilateur_ocaml_dernieres_nouvelles2019-09-30T08:12:13Z2019-09-30T08:12:13Z
Vincent Laviron
We are happy to present some of the ongoing work on the OCaml compiler, carried out in close collaboration with our partner and client Janestreet. A substantial amount of work has gone into a new compiler optimization framework, called Flambda2, which ...<p><img src="/blog/assets/img/picture_cpu_compiler.jpeg" alt="" /></p>
<p>We are happy to present some of the ongoing work on the OCaml compiler, carried out in close collaboration with our partner and client Janestreet.</p>
<p>A substantial amount of work has gone into a new compiler optimization framework, called Flambda2, which we hope will fix some of the shortcomings that became apparent in Flambda. In parallel, the team has completed some immediate improvements to Flambda, as well as compiler changes that will be useful for Flambda2.</p>
<p>See (in English): <a href="/2019/08/30/ocamlpros-compiler-team-work-update/">OCamlPro’s compiler team work update</a></p>
OCaml training sessions by OCamlPro: November 5-6 and 7-8, 2019https://ocamlpro.com/blog/2019_09_26_fr_formations_ocaml_par_ocamlpro_5_6_et_7_8_novembre_20192019-09-26T08:12:13Z2019-09-26T08:12:13Z
OCamlPro
OCamlPro is launching a series of regular OCaml training sessions, in French, in its Paris offices (Alésia metro station). The first session will take place in early November 2019, with 2 courses: Beginner course: switching to OCaml (November 5-6)
Expert course: deepening your mastery of the language (...<p><img src="/blog/assets/img/trainings_2019.png" alt="" /></p>
<p>OCamlPro is launching a series of regular OCaml training sessions, in French, in its Paris offices (Alésia metro station). The first session will take place in early November 2019, with 2 courses:</p>
<ul>
<li>Beginner course: <a href="/formation-passer-a-ocaml/">switching to OCaml</a> (November 5-6)
</li>
<li>Expert course: <a href="/formation-expert-ocaml/">deepening your mastery of the language</a> (November 7-8).
</li>
</ul>
<p>The expert course will be an opportunity for OCaml programmers who already have some experience to better understand the advanced possibilities of the type system (objects, GADTs), to discover in detail how the GC works, and to write code that the compiler can optimize.</p>
<p>These training sessions are also an opportunity to come and discuss with the lead developers of and contributors to OPAM and Flambda at OCamlPro.</p>
<blockquote>
<p>Training sessions in English can also be organized on request at contact@ocamlpro.com</p>
</blockquote>
OCaml expert and beginner training by OCamlPro (in French): Nov. 5-6 & 7-8https://ocamlpro.com/blog/2019_09_25_ocaml_expert_and_beginner_training_by_ocamlpro_in_french_nov_5_6_7_82019-09-25T08:12:13Z2019-09-25T08:12:13Z
OCamlPro
In our endeavour to encourage professional programmers to understand and use OCaml, OCamlPro will be giving two training sessions, in French, in our Paris offices: OCaml Beginner course for professional programmers (5-6 Nov)
OCaml Expertise (7-8 Nov). The "Expert" OCaml course is for already experie...<p><img src="/blog/assets/img/trainings_2019.png" alt="" /></p>
<p>In our endeavour to encourage professional programmers to understand and use OCaml, OCamlPro will be giving two training sessions, in French, in our Paris offices:</p>
<ul>
<li><a href="https://ocamlpro.com/course-ocaml-development/">OCaml Beginner course</a> for professional programmers (5-6 Nov)
</li>
<li><a href="https://ocamlpro.com/course-ocaml-expert/">OCaml Expertise</a> (7-8 Nov).
</li>
</ul>
<p>The "Expert" OCaml course is for already experienced OCaml programmers to better understand advanced type system possibilities (objects, GADTs), discover GC internals, write "compiler-optimizable" code.</p>
<p>These sessions are also an opportunity to come discuss with OCamlPro's OPAM & Flambda lead developers and core contributors in Paris.</p>
<blockquote>
<p>Training in English can also be organized, on-demand.</p>
</blockquote>
<p>Register link: http://ocamlpro.com/forms/preinscriptions-formation-ocaml/</p>
<blockquote>
<p><em>This complements the excellent <a href="https://www.fun-mooc.fr/courses/course-v1:parisdiderot+56002+session04/about">OCaml MOOC</a> from Université Paris-Diderot and the <a href="https://ocaml.foundation/learn-ocaml">learn-OCaml platform</a> of the OCaml Software Foundation.</em></p>
</blockquote>
A look back on OCaml since 2011https://ocamlpro.com/blog/2019_09_20_look_back_ocaml_since_20112019-09-20T08:12:13Z2019-09-20T08:12:13Z
Thomas Blanc
A look back on OCaml since 2011 As you already know if you’ve read our last blogpost, we have updated our OCaml cheat sheets starting with the language and stdlib ones. We know some of you have students to initiate in September and we wanted these sheets to be ready for the start of the school yea...<p><a href="/blog/2019_09_20_look_back_ocaml_since_2011"><img src="/blog/assets/img/ocaml-2011-e1600870731841.jpeg" alt="A look back on OCaml since 2011" /></a></p>
<p>As you already know if you’ve read <a href="/blog/2019_09_13_updated_cheat_sheets_language_stdlib_2">our last blogpost</a>, we have updated our OCaml cheat sheets starting with the language and stdlib ones. We know some of you have students to initiate in September and we wanted these sheets to be ready for the start of the school year! We’re working on more sheets for OCaml tools like opam or Dune and important libraries such as <del>Obj</del> Lwt or Core. Keep an eye on our blog or the <a href="https://github.com/OCamlPro/ocaml-cheat-sheets">repo on GitHub</a> to follow all the updates.</p>
<p>Going through the documentation was a journey to the past: we have looked back on 8 years of evolution of the OCaml language and library. New feature after new feature, OCaml has seen many changes. Needless to say, upgrading our cheat sheets to OCaml 4.08.1 was a trip down memory lane. We wanted to share our throwback experience with you!</p>
<h2>2011</h2>
<p>Fabrice Le Fessant first published our cheat sheets in 2011, the year OCamlPro was created! At the time, OCaml was in its 3.12 version and just <a href="https://inbox.ocaml.org/caml-list/E49008DC-30C0-4B22-9939-85827134C8A6@inria.fr/">got its current name</a> agreed upon. <a href="https://caml.inria.fr/pub/docs/manual-ocaml/manual028.html">First-class modules</a> were the new big thing, Camlp4 and Camlp5 were battling for the control of the syntax extension world and Godi and Oasis were the packaging rage.</p>
<h2>2012</h2>
<p>Right after 3.12 came the switch to OCaml 4.00 which brought a major change: <a href="https://caml.inria.fr/pub/docs/manual-ocaml/manual033.html">GADTs</a> (generalized algebraic data types). Most of OCaml’s developers don’t use their almighty typing power, but the possibilities they provide are really helpful in some cases, most notably the format overhaul. They’re also a fun way to troll a beginner asking how to circumvent the typing system on Stack Overflow. Since most of us might lose track of their exact syntax, GADTs deserve their place in the updated sheet (if you happen to be OCamlPro’s CTO, <em>of course</em> the writer of this blogpost remembers how to use GADTs at all times).</p>
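<p>For those of us who do need that refresher, here is a small self-contained example of the kind of GADT the sheet now covers; the type below is a toy of our own, not from any particular library.</p>
<pre><code class="language-Ocaml">(* A classic GADT: the type parameter of expr tracks what an expression
   evaluates to *)
type _ expr =
  | Int  : int -> int expr
  | Bool : bool -> bool expr
  | Add  : int expr * int expr -> int expr
  | If   : bool expr * 'a expr * 'a expr -> 'a expr

(* The locally abstract type annotation is the part whose syntax is easy to forget *)
let rec eval : type a. a expr -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (x, y) -> eval x + eval y
  | If (c, t, e) -> if eval c then eval t else eval e
</code></pre>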
<p>On the standard library side, the big change was the switch of <code>Hashtbl</code> to Murmur 3 and the support for seeded <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0839">randomization</a>.</p>
<h2>2013</h2>
<p>With OCaml 4.01 came <a href="https://github.com/ocaml/ocaml/issues/5759">constructor disambiguation</a>, but there isn’t really a way to add this to the sheet. This feature allows you to avoid misguided usage of polymorphic variants, but that’s a matter of personal taste (there’s a well-known rule that if you refresh the comments section enough times, someone —usually called Daniel— will appear to explain polymorphic variants’ superiority to you). <code>-ppx</code> rewriters were introduced in this version as well.</p>
<p>The standard library got a few new functions. Notably, <code>Printexc.get_callstack</code> for stack inspection, the optimized application operators <code>|></code> and <code>@@</code> and <code>Format.asprintf</code>.</p>
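<p>As a quick reminder of how those two application operators read in practice, here is a small made-up example:</p>
<pre><code class="language-Ocaml">(* |> pipes a value into the next function; @@ applies a function to
   everything on its right, saving a pair of parentheses *)
let sum_of_squares l =
  l |> List.map (fun x -> x * x) |> List.fold_left ( + ) 0

let () = print_endline @@ string_of_int @@ sum_of_squares [1; 2; 3]
</code></pre>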
<h2>2014</h2>
<p><em>Gabriel Scherer, on the Caml-list, end of January:</em></p>
<blockquote>
<p>TL;DR: During the six next months, we will follow pull requests (PR) posted on the github mirror of the OCaml distribution, as an alternative to the mantis bugtracker. This experiment hopes to attract more people to participate in the extremely helpful and surprisingly rewarding activity of patch reviews.</p>
</blockquote>
<p>Can you guess which change to the cheat-sheets came with 4.02? It’s a universally-loved language feature added in 2014. Still don’t know? It is <em>exceptional</em>! Got it?</p>
<p>Drum roll… it is the <code>match with exception</code> <a href="https://caml.inria.fr/pub/docs/manual-ocaml/patterns.html#sec131">construction</a>! It made our code simpler, clearer and in some cases more efficient. A message to people who want to improve the language: please aim for that.</p>
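<p>In case you have never used it, here is what the construction looks like on a toy example of ours:</p>
<pre><code class="language-Ocaml">(* match ... with exception handles an exception raised by the scrutinee
   without wrapping the whole match in try ... with *)
let read_first_line ic =
  match input_line ic with
  | line -> Some line
  | exception End_of_file -> None
</code></pre>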
<p>This version also added the <code>{quoted|foo|quoted}</code> <a href="https://caml.inria.fr/pub/docs/manual-ocaml/lex.html#string-literal">syntax</a> (which broke comments), generative functors, attributes and <a href="https://caml.inria.fr/pub/docs/manual-ocaml/manual036.html">extension nodes</a>, extensible data types, module aliases and, of course, immutable strings (which was optional at the time). Immutable strings is the one feature that prompted us to <em>remove</em> a line from the cheat sheets. More space is good. Camlp4 and Labltk moved out of the distribution.</p>
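<p>The quoted string literals are worth a tiny illustration, since they are exactly what you want for regexes or embedded snippets; the delimiter identifier below is arbitrary.</p>
<pre><code class="language-Ocaml">(* {id|...|id} string literals (4.02): no escaping needed between the delimiters *)
let windows_path = {path|C:\Users\camel\"quoted"|path}
let () = print_endline windows_path
</code></pre>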
<p>As a consequence of immutable strings, <code>Bytes</code> and <code>BytesLabels</code> were added to the library. For the great pleasure of optimization addicts, <code>raise_notrace</code> popped up. Under the hood, the <code>format</code> type was re-implemented using GADTs.</p>
<h2>2015</h2>
<p>This release was so big that 4.02.2 feels like a release in itself, with the addition of <code>nonrec</code> and the <code>#...</code> operators.</p>
<p>The standard library was spared by this bug-fix themed release. Note that this is the last comparatively slow year of OCaml as the transition to GitHub would soon make features multiply, as hindsight teaches us.</p>
<h2>2016</h2>
<p>Speaking of a major release, we’re up to OCaml 4.03! It introduced <a href="https://caml.inria.fr/pub/docs/manual-ocaml/manual040.html">inline records</a>, a GADT exhaustiveness check on steroids (with <code>-> .</code> to denote unreachability) and standard attributes like <code>warning</code>, <code>inlined</code>, <code>unboxed</code> or <code>immediate</code>. Colors appeared in the compiler and last but not least, it was the dawn of a new option called <a href="http://ocamlpro.com/tag/flambda2-en/">Flambda</a>.</p>
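<p>Inline records are one of those 4.03 additions that quietly show up everywhere; a minimal example of our own:</p>
<pre><code class="language-Ocaml">(* Inline records (4.03): constructor arguments get named fields without
   the noise of a separate record type *)
type shape =
  | Circle of { radius : float }
  | Rect   of { width : float; height : float }

let area = function
  | Circle { radius } -> 4. *. atan 1. *. radius *. radius
  | Rect { width; height } -> width *. height
</code></pre>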
<p>The library saw a lot of useful new functions coming in: lots of new iterators for <code>Array</code>, an <code>equal</code> function in most basic type modules, <code>Uchar</code>, the <code>*_ascii</code> alternatives and, of course, <code>Ephemeron</code>.</p>
<p>4.04 was much more restrained, but it was the second release in a single year. Local opening of modules with the <code>M.{}</code> syntax was added, along with the <code>let exception ... in</code> construct. <code>String.split_on_char</code> was notably added to the stdlib, which means we don’t have to rewrite it anymore.</p>
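<p>A short sketch of the <code>let exception ... in</code> construct, which is handy for early exits that stay local to a function (the function below is just an illustration of ours):</p>
<pre><code class="language-Ocaml">(* let exception ... in (4.04): a locally scoped exception used as an early exit *)
let find_index p arr =
  let exception Found of int in
  try
    Array.iteri (fun i x -> if p x then raise (Found i)) arr;
    None
  with Found i -> Some i
</code></pre>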
<h2>2017</h2>
<p>We now get to 4.05… which did not change the language. Not that the development team wasn’t busy, OCaml just got better without any change to the syntax.</p>
<p>On the library side however, much happened, with the adding of <code>*_opt</code> functions pretty much everywhere. If you’re using the OCaml compiler from <a href="https://packages.debian.org/sid/ocaml">Debian</a>, this is where you might think the story ends. You’d be wrong…</p>
<p>…because 4.06 added a lot! My own favorite feature from this release has to be user-defined <a href="https://caml.inria.fr/pub/docs/manual-ocaml/manual042.html">indexing operators</a>. This is also when <code>safe-string</code> became the default, giving worthwhile work to every late maintainer in the community. This release also added one awesome function in the standard library: <code>Map.update</code>.</p>
<h2>2018</h2>
<p>4.07 was aimed towards solidifying the language. It added empty variants and type-based selection of GADT constructors to the mix.</p>
<p>On the library side, one old and two new modules were added, with the integration of <code>Bigarray</code>, <code>Seq</code> and <code>Float</code>.</p>
<h2>2019</h2>
<p>And here we are with 4.08, in the present day! We can now put exceptions under or-patterns, which is the only language change from this release we propagated to the sheet. Time will tell if we need to add custom <a href="https://caml.inria.fr/pub/docs/manual-ocaml/manual046.html">binding operators</a> or <code>[@@alert]</code>. <code>Pervasives</code> is now deprecated in favor of <code>Stdlib</code> and new modules are popping up (<code>Int</code>, <code>Bool</code>, <code>Fun</code>, <code>Result</code>… did we miss one?) while <code>Sort</code> issued its final deprecation warning.</p>
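<p>If we do end up adding binding operators to the sheet, they will look something like this toy <code>let*</code> for options (our own definition for illustration, not a stdlib one):</p>
<pre><code class="language-Ocaml">(* Binding operators (4.08): a let* that short-circuits on None *)
let ( let* ) opt f = match opt with
  | None -> None
  | Some x -> f x

let safe_div x y =
  let* y = if y = 0 then None else Some y in
  Some (x / y)
</code></pre>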
<p>We did not add 4.09 to this journey to the past, as this release is still solidly in the <em>now</em> at the time of this blogpost. Rest assured, we will see much more awesome features in OCaml in the future! In the meantime, we are working on updating more cheat sheets: keep posted!</p>
<h1>Comments</h1>
<p>Micheal Bacarella (23 September 2019 at 18 h 17 min):</p>
<blockquote>
<p>For a blog-post from a company called OCaml PRO this seems like a rather tone-deaf PR action.</p>
<p>I wanted to read this and get hyped but instead I’m disappointed and I continue to feel like a chump advocating for this language.</p>
<p>Why? Because this is a rather underwhelming summary of <em>8 years</em> of language activity. Perhaps you guys didn’t intend for this to hit the front of Hacker News, and maybe this stuff is really exciting to programming language PhDs, but I don’t see how the average business OCaml developer would relate to many of these changes at all. It makes OCaml (still!) seem like an out-of-touch academic language where the major complaints about the language are ignored (multicore, Windows support, programming-in-the-large, debugging) while ivory tower people fiddle with really nailing type-based selection in GADTs.</p>
<p>I expect INRIA not to care about the business community but aren’t you guys called OCaml PRO? I thought you <em>liked</em> money.</p>
<p>You clearly just intended this to be an interesting summary of changes to your cheatsheet but it’s turned into a PR release for the language and leaves normals with the continued impression that this language is a joke.</p>
</blockquote>
<p>Thomas Blanc (24 September 2019 at 14 h 57 min):</p>
<blockquote>
<p>Yes, latency can be frustrating even in the OCaml realm. Thanks for your comment, it is nice to see people caring about it and trying to remedy through contributions or comments.</p>
<p>Note that we only posted on discuss.ocaml.org expecting to get one or two comments. The reason for this post was that while updating the CS we were surprised to see how much the language had changed and decided to write about it.</p>
<p>You do raise some good points though. We did work on a full windows support back in the day. The project was discontinued because nobody was willing to buy it. We also worked on memory profiling for the debugging of memory leaks (before other alternatives existed). We did not maintain it because the project had no money input. I personally worked on compile-time detection of uncaught exception until the public funding of that project ran out. We also had a proposal for namespaces in the language that would have facilitated programming-in-the-large (no funding) and worked on multicore (funding for one man for one year).</p>
</blockquote>
Cheat Sheets update: OCaml Language and OCaml Standard Libraryhttps://ocamlpro.com/blog/2019_09_14_fr_mise_a_jour_des_cheat_sheets_ocaml_language_et_ocaml_standard_library2019-09-14T08:12:13Z2019-09-14T08:12:13Z
Thomas Blanc
The OCaml lang and OCaml stdlib cheat sheets shared by OCamlPro in 2011 have been updated for OCaml 4.08. The OCaml language
OCaml Standard Library If you would like to contribute improvements: sources on GitHub. Read more: Updated Cheat Sheets: OCaml Language and OCaml S...<p>The OCaml lang and OCaml stdlib cheat sheets shared by OCamlPro in 2011 have been updated for OCaml 4.08.</p>
<ul>
<li><a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-lang.pdf">The OCaml language</a>
</li>
<li><a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-stdlib.pdf">OCaml Standard Library</a>
</li>
</ul>
<p>If you would like to contribute improvements: <a href="https://github.com/OCamlPro/ocaml-cheat-sheets">sources on GitHub</a>.</p>
<p>Read more: <a href="/2019/09/13/updated-cheat-sheets-ocaml-language-and-ocaml-standard-library/">Updated Cheat Sheets: OCaml Language and OCaml Standard Library</a></p>
Updated Cheat Sheets: OCaml Language and OCaml Standard Libraryhttps://ocamlpro.com/blog/2019_09_13_updated_cheat_sheets_language_stdlib_22019-09-13T08:12:13Z2019-09-13T08:12:13Z
Thomas Blanc
In 2011, we shared several cheat sheets for OCaml. Cheat sheets are helpful to refer to, as an overview of the documentation when you are programming, especially when you’re starting in a new language. They are meant to be printed and pinned on your wall, or to be kept in handy on a spare screen. ...<p>In 2011, we shared several cheat sheets for OCaml. Cheat sheets are helpful to refer to, as an overview of the documentation when you are programming, especially when you’re starting in a new language. They are meant to be printed and pinned on your wall, or to be kept in handy on a spare screen. We hope they will help you out when your rubber duck is rubbish at debugging your code!</p>
<p>Since we first shared them, OCaml and its related tools have evolved. We decided to refresh them and started with the two most-used cheat sheets—our own contribution to the start of the school year!</p>
<p>Download the revised version:</p>
<ul>
<li><a href="http://ocamlpro.com/wp-content/uploads/2019/09/ocaml-lang.pdf">OCaml Language (lang)</a> (PDF)
</li>
<li><a href="http://ocamlpro.com/wp-content/uploads/2019/09/ocaml-stdlib.pdf">OCaml Standard Library (stdlib)</a> (PDF)
</li>
</ul>
<p>You can also find <a href="https://github.com/OCamlPro/ocaml-cheat-sheets">the sources on GitHub</a>. We welcome contributions, feel free to send patches if you see room for improvement! We’re working on other cheat sheets: keep an eye on our blog to see updates and brand new cheat sheets.</p>
<p>While we were updating them, we realized how much OCaml had evolved in the last eight years. We’ll tell you everything about our trip down memory lane very soon in another blogpost!</p>
OCamlPro’s compiler team work updatehttps://ocamlpro.com/blog/2019_08_30_ocamlpros_compiler_team_work_update2019-08-30T08:12:13Z2019-08-30T08:12:13Z
Vincent Laviron
The OCaml compiler team at OCamlPro is happy to present some of the work recently done jointly with JaneStreet's team. A lot of work has been done towards a new framework for optimizations in the compiler, called Flambda2, aiming at solving the shortcomings that became apparent in the Flambda optimi...<p><img src="/blog/assets/img/picture_cpu_compiler.jpeg" alt="" /></p>
<p>The OCaml compiler team at OCamlPro is happy to present some of the work recently done jointly with JaneStreet's team.</p>
<p>A lot of work has been done towards a new framework for optimizations in the compiler, called Flambda2, aiming at solving the shortcomings that became apparent in the Flambda optimization framework (see below for more details). While that work is in progress, the team also worked on some more short-term improvements, notably on the current Flambda optimization framework, as well as some compiler modifications that will benefit Flambda2.</p>
<blockquote>
<p><em>This work is funded by JaneStreet :D</em></p>
</blockquote>
<h3>Short-term improvements</h3>
<h4>Recursive values compilation</h4>
<p>OCaml supports quite a large range of recursive definitions. In addition to recursive (and mutually-recursive) functions, one can also define regular values recursively, as for the infinite list <code>let rec l = 0 :: l</code>.</p>
<p>Not all recursive constructions are allowed, of course. For instance, the definition <code>let rec x = x</code> is rejected because there is no way to actually build a value that would behave correctly.</p>
<p>The basic rule for deciding whether a definition is allowed or not is made under the assumption that recursive values (except for functions, mostly) are compiled by first allocating space in the heap for the recursive values, binding the recursively defined variables to the allocated (but not yet initialized) values. The defining expressions are then evaluated, yielding new values (that can contain references to the not-yet-initialized values). Finally, the fields of these new values are copied one-by-one into the corresponding fields of the initial values. (A few illustrative definitions are shown after the list below.)</p>
<p>For this approach to work, some restrictions need to apply:</p>
<ul>
<li>the compiler needs to be able to compute the size of the values beforehand (these values must be allocated values, in order to avoid defining an integer recursively),
</li>
<li>and since during the evaluation of the defining expressions their fields are not valid, one cannot write any code that may read these fields, like pattern-matching on the value, or passing the value to some function (or storing it in a mutable field of some record).
</li>
</ul>
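<p>To make these rules concrete, here are a few illustrative definitions (the rejected ones are shown commented out, since the compiler refuses them):</p>
<pre><code class="language-ocaml">(* Accepted: the size of the cons cell is known statically, and [l] only
   appears directly under a constructor, so it is never read while the
   value is being built. *)
let rec l = 0 :: l

(* Rejected: nothing is allocated, so there is no value whose size the
   compiler could compute and pre-allocate. *)
(* let rec x = x *)

(* Rejected: the defining expression reads the value before it is
   initialized, by pattern-matching on it. *)
(* let rec v = match v with [] -> [0] | x :: _ -> [x] *)
</code></pre>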
<p>All of those restrictions have recently been reworked and formalized based on work from Alban Reynaud during an internship at Inria, reviewed and completed by Gabriel Scherer and Jeremy Yallop.</p>
<p>Unfortunately, this work only covers checking whether the recursive definitions are allowed or not; actual compilation is done later in the compiler, in one place for bytecode and another for native code, and these pieces of code have not been linked with the new check so there have been a few cases where the check allowed code that wasn't actually compiled correctly.</p>
<p>Since we didn't want to deal with it directly in our new version of Flambda, we had started working on a patch to move the compilation of recursive values up in the compilation pipeline, before the split between bytecode and native code. After some amount of hacking (we discovered that compilation of classes creates recursive value bindings that would not pass the earlier recursive check…), we have a patch that is mostly ready for review and will soon start engaging with the rest of the compiler team with the aim of integrating it into the compiler.</p>
<h4>Separate compilation of recursive modules, compilation units as functors</h4>
<p>Some OCaml developers like to encapsulate each type definition in its own module, with an interface that can expose the needed types and functions, while abstracting away as much of the actual implementation as possible. It is then common to have each of these modules in its own file, to simplify management and avoid unwieldy, large files.</p>
<p>However, this breaks down when one needs to define several types that depend on each other. The usual solutions are either to use recursive modules, which have the drawback of requiring all the modules to be in the same compilation unit, leading to very big files (we have seen a real case of a file of more than 10,000 lines), or to make each module parametric in the other modules, translating them into functors, and then to instantiate all the functors when building the outward-facing interface.</p>
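<p>As a rough sketch of what the functor-based workaround described above looks like (the module names here are purely illustrative):</p>
<pre><code class="language-ocaml">(* The signature that Stmt actually needs from Expr. *)
module type EXPR = sig
  type t
  val size : t -> int
end

(* Stmt is made parametric in Expr to break the mutual dependency; the
   outward-facing interface then has to instantiate this functor (and all
   the others) explicitly. *)
module MakeStmt (Expr : EXPR) = struct
  type t =
    | Skip
    | Eval of Expr.t
    | Seq of t * t

  let rec size = function
    | Skip -> 1
    | Eval e -> 1 + Expr.size e
    | Seq (s1, s2) -> 1 + size s1 + size s2
end
</code></pre>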
<p>To address these issues, we have been working on two main patches to improve the life of developers facing these problems.</p>
<p>The first one allows compiling several different files as mutually recursive modules, reusing the approach used to compile regular recursive modules. In practice, this will allow developers using recursive modules extensively to properly separate not only the different modules from each other, but also the implementations and interfaces into <code>.ml</code> and <code>.mli</code> files. This would of course need some additional support from the different build tools, but we're confident we can get at least <code>dune</code> to support the feature.</p>
<p>The second one allows compiling a single compilation unit as a functor instead of a regular module. The arguments of the functor would be specified on the command line, their signature taken from their corresponding interface file. This can be useful not only to break recursive dependencies, like the previous patch (though in a different way), but also to help developers relying on multiple implementations of a same <code>.mli</code> interface functorize their code with minimal effort.</p>
<p>These two improvements will also benefit packs: recursive compilation units could be packed into a single module, and packs themselves could be functorized.</p>
<h4>Small improvements to Flambda</h4>
<p>We are still committed to maintaining the Flambda part of the compiler. Few bugs have been found recently, so we are concentrating our efforts on small features that either yield overall performance gains or allow naive code patterns to be compiled as efficiently as their equivalent but hand-optimized versions.</p>
<p>As an example, one optimization that we should be able to submit soon looks for cases where an immutable block is allocated but an immutable block with the same exact fields and tag already exists.</p>
<p>This can be demonstrated with the following example:</p>
<pre><code class="language-ocaml">let result_bind f = function
| Ok x -> f x
| Error e -> Error e
</code></pre>
<p>The usual way to avoid the extra allocation of <code>Error e</code> is to write the clause as <code>| (Error e) as r -> r</code>. With this new patch, the redundant allocation will be detected and removed automatically! This can be even more interesting with inlining:</p>
<pre><code class="language-ocaml">let my_f x =
if (* some condition *)
then Ok x
else (* something else *)
let _ =
(* ... *)
let r = result_bind my_f (* some argument *) in
(* ... *)
</code></pre>
<p>In this example, inlining <code>result_bind</code> then <code>my_f</code> can match the allocation <code>Ok x</code> in <code>my_f</code> with the pattern matching in <code>result_bind</code>. This removes an allocation that would be very hard to remove otherwise. We expect these patterns to occur quite often with some programming styles relying on a great deal of abstraction and small independent functions.</p>
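<p>For illustration, here is a self-contained (and deliberately artificial) variant of the schematic code above; <code>result_bind</code> is repeated so that the snippet compiles on its own:</p>
<pre><code class="language-ocaml">let result_bind f = function
  | Ok x -> f x
  | Error e -> Error e

(* A made-up function standing in for "some condition / something else". *)
let my_f x =
  if x >= 0
  then Ok x
  else Error "negative"

let process (r : (int, string) result) =
  (* After inlining [result_bind] and [my_f], the [Ok x] allocated in
     [my_f] (and the [Error e] in [result_bind]) have the same tag and
     fields as the block already matched on, so the optimization can
     reuse the existing block instead of allocating a fresh one. *)
  result_bind my_f r

let () =
  match process (Ok 42) with
  | Ok n -> Printf.printf "ok: %d\n" n
  | Error msg -> Printf.printf "error: %s\n" msg
</code></pre>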
<h3>Flambda 2.0</h3>
<p>We are building on the work done for Flambda and the experience of its users to develop Flambda 2.0, the next optimization framework.</p>
<p>Our goal is to build a framework for analyzing the costs and benefits of code transformations. The framework focuses on reducing the runtime cost of abstractions and removing as many short-lived allocations as possible.</p>
<p>The aim of Flambda 2.0 is roughly the same as the original Flambda. So why did we decide to write a new framework instead of patching the existing one? Several points led us to this decision.</p>
<ul>
<li>An invariant on the representation of closures that ensured that every closure had a unique identifier, which was convenient for a number of reasons, turned out to be quite expensive to maintain and prevented some optimizations.
</li>
<li>The internal representation of Flambda terms included too many different cases that were either redundant or not relevant to the optimizations we were interested in, making a lot of code more complicated than necessary.
</li>
<li>The ANF-like representation we used was not perfect. We wanted an easier way to do control flow optimizations, which led us to choose a CPS-like representation for Flambda 2.0.
</li>
<li>Finally, the original Flambda was thought of as an alternative to the closure conversion and inlining algorithms performed by the <code>Closure</code> module of the compiler, translating from the <code>Lambda</code> representation to <code>Clambda</code>. However, a number of optimizations (most importantly unboxing) are done during the next phase of compilation, <code>Cmmgen</code>, which translates to the <code>Cmm</code> representation. The original Flambda had trouble estimating correctly which optimizations would trigger and what their benefit would be. It may be noted that correctly estimating benefit is key to Flambda's algorithms, and we know of a number of cases where Flambda is not as good as it could be because it couldn't predict the unboxing opportunities that inlining would have allowed. Flambda 2.0 will go from <code>Lambda</code> to <code>Cmm</code>, and will handle all transformations done in both <code>Closure</code> and <code>Cmmgen</code> in a single framework.
</li>
</ul>
<p>These improvements are still very much a work in progress. We have not reached the point where other developers can try out the new framework on their codebases yet.</p>
<p>This does not mean there is no news to enjoy before our efforts show up in the mainstream compiler! While working on Flambda 2.0, we did deploy a number of patches on the compiler, both before and after the Flambda stage. We proposed separately all the changes that were independent enough to stand on their own. Some of these fixes have been merged already. Others are still under discussion and some, like the recursive values patch mentioned above, are still waiting for cleanup or documentation before submission.</p>
<h1>Comments</h1>
<p>Jon Harrop (30 August 2019 at 20 h 11 min):</p>
<blockquote>
<p>What is the status of multicore OCaml?</p>
</blockquote>
<p>Vincent Laviron (2 September 2019 at 16 h 22 min):</p>
<blockquote>
<p>OCamlPro is not working on multicore OCaml. It is still being worked on elsewhere, with efforts concentrated around OCaml Labs, but I don’t have more information than what is publicly available. None of the work we described here is expected to interfere with multicore.</p>
</blockquote>
<p>Lindsay (25 September 2020 at 20 h 20 min):</p>
<blockquote>
<p>Thanks for your continued work on the compiler and tooling! Am curious if there is any news regarding the item “Separate compilation of recursive modules”.</p>
</blockquote>
Release d’opam 2.0.5https://ocamlpro.com/blog/2019_07_23_fr_release_dopam_2.0.52019-07-23T08:12:13Z2019-07-23T08:12:13Z
Raja Boujbel
Louis Gesbert
We are proud to announce the (minor) release of opam 2.0.5. This new version contains build updates and small fixes. More information...<p>We are proud to announce the (minor) release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.5">opam 2.0.5</a>. This new version contains build updates and small fixes.</p>
<blockquote>
<p><a href="/2019/07/11/opam-2-0-5-release/">More information</a></p>
</blockquote>
opam 2.0.5 releasehttps://ocamlpro.com/blog/2019_07_11_opam_2.0.5_release2019-07-11T08:12:13Z2019-07-11T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the minor release of opam 2.0.5. This new version contains build update and small fixes: Bump src_ext Dune to 1.6.3, allows compilation with OCaml 4.08.0. [#3887 @dra27]
Support Dune 1.7.0 and later [#3888 @dra27 - fix #3870]
Bump the ocaml_mccs lib-ext, to include latest ...<p>We are pleased to announce the minor release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.5">opam 2.0.5</a>.</p>
<p>This new version contains build updates and small fixes:</p>
<ul>
<li>Bump src_ext Dune to 1.6.3, allows compilation with OCaml 4.08.0. [<a href="https://github.com/ocaml/opam/pull/3887">#3887</a> <a href="https://github.com/dra27">@dra27</a>]
</li>
<li>Support Dune 1.7.0 and later [<a href="https://github.com/ocaml/opam/pull/3888">#3888</a> <a href="https://github.com/dra27">@dra27</a> - fix <a href="https://github.com/ocaml/opam/issues/3870">#3870</a>]
</li>
<li>Bump the ocaml_mccs lib-ext, to include latest changes [<a href="https://github.com/ocaml/opam/pull/3896">#3896</a> <a href="https://github.com/AltGr">@AltGr</a>]
</li>
<li>Fix cppo detection in configure [<a href="https://github.com/ocaml/opam/pull/3917">#3917</a> <a href="https://github.com/dra27">@dra27</a>]
</li>
<li>Read jobs variable from OpamStateConfig [<a href="https://github.com/ocaml/opam/pull/3916">#3916</a> <a href="https://github.com/dra27">@dra27</a>]
</li>
<li>Linting:
<ul>
<li>add check upstream option [<a href="https://github.com/ocaml/opam/pull/3758">#3758</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>add warning for with-test in run-test field [<a href="https://github.com/ocaml/opam/pull/3765">#3765</a>, <a href="https://github.com/ocaml/opam/pull/3860">#3860</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>fix misleading <code>doc</code> filter warning [<a href="https://github.com/ocaml/opam/pull/3871">#3871</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
</ul>
</li>
<li>Fix typos [<a href="https://github.com/ocaml/opam/pull/3891">#3891</a> <a href="https://github.com/dra27">@dra27</a>, <a href="https://github.com/mehdid">@mehdid</a>]
</li>
</ul>
<blockquote>
<p>Note: To homogenise macOS name on system detection, we decided to keep <code>macos</code>, and convert <code>darwin</code> to <code>macos</code> in opam. For the moment, to not break jobs & CIs, we keep uploading <code>darwin</code> & <code>macos</code> binaries, but from the 2.1.0 release, only <code>macos</code> ones will be kept.</p>
</blockquote>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.5">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-sesiion">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.5#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
Résultats de la SMT-Comp 2019 pour Alt-Ergohttps://ocamlpro.com/blog/2019_07_10_results_smt_comp_20192019-07-10T08:12:13Z2019-07-10T08:12:13Z
Albin Coquereau
The results of the SMT-COMP 2019 competition were published at the SMT workshop of the 22nd SAT conference. We were proud to take part for the second year in a row, especially since Alt-Ergo supports the SMT-LIB 2 standard. Alt-Ergo is an open-source SMT solver maintained...<p>The results of the SMT-COMP 2019 competition were published at the SMT workshop of the <a href="http://smt2019.galois.com/">22nd SAT conference</a>. We were proud to take part for the second year in a row, especially since Alt-Ergo <a href="2019_02_11_whats-new-for-alt-ergo-in-2018-here-is-a-recap">supports</a> the <a href="http://smtlib.cs.uiowa.edu/">SMT-LIB 2</a> standard.</p>
<blockquote>
<p>Alt-Ergo is an open-source SMT solver maintained and distributed by OCamlPro, and funded, among others, through several collaborative R&D projects (BWare, SOPRANO, Vocal, LChip).</p>
<p>If you are an Alt-Ergo user, consider joining the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo User’s Club</a>! The history of this software goes back to 2006, when it grew out of joint academic research between Inria and the CNRS at the LRI lab. Since September 2013 it has been maintained, developed and distributed by OCamlPro (see the history of <a href="https://alt-ergo.ocamlpro.com/#releases">past releases</a>).</p>
<p><em>If you are curious about OCamlPro’s activities in formal methods, you can read the short testimonial of a <a href="http://ocamlpro.com/clients-partners/#mitsubishi-merce">happy client</a></em></p>
</blockquote>
<p>See the English version of this post: <a href="2019_07_09_alt-ergo-participation-to-the-smt-comp-2019">The Alt-Ergo SMT Solver’s results in the SMT-COMP 2019</a></p>
The Alt-Ergo SMT Solver’s results in the SMT-COMP 2019https://ocamlpro.com/blog/2019_07_09_alt_ergo_participation_to_the_smt_comp_20192019-07-09T08:12:13Z2019-07-09T08:12:13Z
Albin Coquereau
The results of the SMT-COMP 2019 were released a few days ago at the SMT workshop during the 22nd SAT conference. We were glad to participate in this competition for the second year in a row, especially as Alt-Ergo now supports the SMT-LIB 2 standard. Alt-Ergo is an open-source SMT solver maintaine...<p>The results of the SMT-COMP 2019 were released a few days ago at the SMT workshop during the <a href="http://smt2019.galois.com/">22nd SAT conference</a>. We were glad to participate in this competition for the second year in a row, especially as Alt-Ergo <a href="2019_02_11_whats-new-for-alt-ergo-in-2018-here-is-a-recap">now supports</a> the SMT-LIB 2 standard.</p>
<blockquote>
<p>Alt-Ergo is an open-source SMT solver maintained and distributed by OCamlPro and partially funded by R&D projects. If you’re interested, please consider joining the <a href="https://alt-ergo.ocamlpro.com/#club">Alt-Ergo User’s Club</a>! Its history goes back to 2006, with early academic research conducted jointly at the Inria & CNRS “LRI” lab; OCamlPro has handled its maintenance and development since September 2013 (see the <a href="https://alt-ergo.ocamlpro.com/#releases">past releases</a>).</p>
<p>If you’re curious about OCamlPro’s other activities in Formal Methods, see a happy client’s <a href="/#mitsubishi-merce">feedback</a></p>
</blockquote>
<h2>SMT-COMP 2018</h2>
<p>Our goal last year was to challenge ourselves on the community benchmarks. We wanted to compare Alt-Ergo to state-of-the-art SMT solvers. We thus selected categories close to deductive program verification, as Alt-Ergo is primarily tuned for formulas coming from this application domain. Specifically, we took part in four main track categories: ALIA, AUFLIA, AUFLIRA and AUFNIRA. These categories are combinations of theories such as arrays, uninterpreted functions, and linear and non-linear arithmetic over integers and reals.</p>
<h3>Alt-Ergo’s Results at SMT-COMP 2018</h3>
<p>For its first participation in SMT-COMP, Alt-Ergo showed that it was competitive with state-of-the-art solvers such as CVC4, Vampire, VeriT or Z3.</p>
<figure class="wp-block-table">
<table>
<tbody>
<tr>
<td>Main Track Categories (number of participants)</td>
<td>Sequential Perfs</td>
<td>Parallel Perfs</td>
</tr>
<tr>
<td><a href="http://smtcomp.sourceforge.net/2018/results-ALIA.shtml?v=1531410683">ALIA</a> (4)</td>
<td><img src="/blog/assets/img/icon_silver.png" alt="2nd place" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="http://smtcomp.sourceforge.net/2018/results-AUFLIA.shtml?v=1531410683">AUFLIA</a> (4)</td>
<td><img src="/blog/assets/img/icon_silver.png" alt="2nd place" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_silver.png" alt="2nd place" width="24" height="24"></td>
</tr>
<tr>
<td><a href="http://smtcomp.sourceforge.net/2018/results-AUFLIRA.shtml?v=1531410683">AUFLIRA</a> (4)</td>
<td><img src="/blog/assets/img/icon_silver.png" alt="2nd place" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="http://smtcomp.sourceforge.net/2018/results-AUFNIRA.shtml?v=1531410683">AUFNIRA</a> (3)</td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
</tr>
</tbody>
</table>
</figure>
<p>The global results of the competition are available <a href="http://smtcomp.sourceforge.net/2018/results-toc.shtml">here</a>.</p>
<h2>SMT-COMP 2019</h2>
<p>Since last year’s competition, we have made some improvements to Alt-Ergo, specifically to our data structures and to the support of algebraic datatypes (see <a href="http://ocamlpro.com/2019/02/11/whats-new-for-alt-ergo-in-2018-here-is-a-recap">this post</a>).</p>
<p>A few changes can be noted for this year’s competition:</p>
<ul>
<li>A distinction between SAT and UNSAT in the scoring scheme allowed us to compete in more categories, as Alt-Ergo doesn’t send back SAT.
</li>
<li>The aim of the 24s Scoring is to reward solvers which solve problems quickly.
</li>
<li>The number of benchmarks in each category has changed. For each category, only the benchmarks which were not proven by every solver last year are used. For example: in the division AUFLIRA, 20011 benchmarks were used last year, of which 1683 remained this year.
</li>
</ul>
<p>Alt-Ergo only competed in the Single Query Track. We selected the same categories as last year and added UF, UFLIA, UFLRA and UFNIA. We also decided to compete in categories supporting algebraic datatypes to test our newly added support for this theory. Alt-Ergo’s expertise is in quantified problems, but we also wanted to try our hand at some quantifier-free categories to exercise the solver’s theories.</p>
<h3>Alt-Ergo’s Results at SMT-COMP 2019</h3>
<p>We were proud to see Alt-Ergo perform within a reasonable margin of the other solvers on the UNSAT instances of quantifier-free problems, even though these problems are not our solver’s primary target. And we were happy with the performance of our solver in the datatype categories, as the support for this theory is new.</p>
<p>In the other categories, Alt-Ergo managed to reproduce last year’s performance, close to CVC4 (the 2018 and 2019 winner) and Vampire.</p>
<figure class="wp-block-table">
<table>
<tbody>
<tr>
<td>Single Query Categories<br>(number of participants)</td>
<td>Sequential</td>
<td>Parallel</td>
<td>Unsat</td>
<td>24s</td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/alia-single-query">ALIA</a> (8)</td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_gold.png" alt="" width="33" height="33"></td>
<td><img src="/blog/assets/img/icon_silver.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/auflia-single-query">AUFLIA</a> (8)</td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/auflira-single-query">AUFLIRA</a> (8)</td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/aufnira-single-query">AUFNIRA</a> (5)</td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_silver.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/uf-single-query">UF</a> (8)</td>
<td><img src="http://ocamlpro.com/wp-content/uploads/2019/07/Copper.png" alt="" width="24" height="24"></td>
<td><img src="http://ocamlpro.com/wp-content/uploads/2019/07/Copper.png" alt="" width="24" height="24"></td>
<td><img src="http://ocamlpro.com/wp-content/uploads/2019/07/Copper.png" alt="" width="24" height="24"></td>
<td><img src="http://ocamlpro.com/wp-content/uploads/2019/07/Copper.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/uflia-single-query">UFLIA</a> (8)</td>
<td><img src="http://ocamlpro.com/wp-content/uploads/2019/07/Copper.png" alt="" width="24" height="24"></td>
<td><img src="http://ocamlpro.com/wp-content/uploads/2019/07/Copper.png" alt="" width="24" height="24"></td>
<td><img src="http://ocamlpro.com/wp-content/uploads/2019/07/Copper.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/uflra-single-query">UFLRA</a> (8)</td>
<td><img src="/blog/assets/img/icon_gold.png" alt="" width="33" height="33"></td>
<td><img src="/blog/assets/img/icon_gold.png" alt="" width="33" height="33"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_gold.png" alt="" width="33" height="33"></td>
</tr>
<tr>
<td><a href="https://smt-comp.github.io/2019/results/ufnia-single-query">UFNIA</a> (8)</td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
<td><img src="/blog/assets/img/icon_bronze.png" alt="" width="24" height="24"></td>
</tr>
</tbody>
</table>
</figure>
<p>This year’s results are available <a href="https://smt-comp.github.io/2019/results.html">here</a>. These results do not include Par4, a portfolio solver.</p>
<p>Alt-Ergo is constantly evolving, as well as our support of the SMT-LIB standard. For next year’s participation, we will try to compete in more categories and hope to cover more tracks, such as the UNSAT-Core track.</p>
<p><img src="/assets/img/logo_altergo.png" alt="" /></p>
Blockchains @ OCamlPro: an Overviewhttps://ocamlpro.com/blog/2019_04_29_blockchains_at_ocamlpro_an_overview2019-04-29T08:12:13Z2019-04-29T08:12:13Z
Fabrice Le Fessant
OCamlPro started working on blockchains in 2014, when Arthur Breitman came to us with an initial idea to develop the Tezos ledger. The idea was very challenging with a lot of innovations. So, we collaborated with him to write a specification, and to turn the specification into OCaml code. Since then...<p>OCamlPro started working on blockchains in 2014, when Arthur Breitman
came to us with an initial idea to develop the Tezos ledger. The idea
was very challenging with a lot of innovations. So, we collaborated
with him to write a specification, and to turn the specification into
OCaml code. Since then, we continually improved our skills in this
domain, trained more engineers, introduced the technology to students
and to professionals, advised a dozen projects, developed tools and
libraries, made some improvements and extensions to the official Tezos
node, and conducted several private deployments of the Tezos ledger.</p>
<blockquote>
<p>For an overview of OCamlPro’s blockchain activities see <a href="/blog/category/blockchains">here</a></p>
</blockquote>
<h2>TzScan: A complete Block Explorer for Tezos</h2>
<p><a href="https://tzscan.io">TzScan</a> is considered today to be the best block
explorer for Tezos. It’s made of three main components:</p>
<ul>
<li>an indexer that queries the Tezos node and fills a relational
database,
</li>
<li>an API server that queries the database to retrieve various
information,
</li>
<li>a web based user interface (a Javascript application)
</li>
</ul>
<p>We deployed the indexer and API to freely provide the community with
access to all the content of the Tezos blockchain, already used by
many websites, wallets and apps. In addition, we directly use this API
within our TzScan.io instance. Our deployment spans multiple Tezos
nodes, multiple API servers and a distributed database in order to scale and
reply to millions of queries per day. We also regularly release open-source
versions under the GPL license, which can easily be deployed on
private Tezos networks. TzScan’s development started in
September 2017. It represents an enormous investment today, which the
Tezos Foundation helped partially fund in July 2018.</p>
<blockquote>
<p>Contact us for support, advanced features, advertisement, or if you need a private deployment of the TzScan infrastructure.</p>
</blockquote>
<h2>Liquidity: a Smart Contract Language for Tezos</h2>
<p><a href="https://www.liquidity-lang.org">Liquidity</a> is the first high-level
language for Tezos over Michelson. Its development began in April
2017, a few months before the Tezos fundraising in July 2017. It is
today the most advanced language for Tezos: it offers OCaml-like and
ReasonML-like syntaxes for writing smart contracts, compilation and
de-compilation to/from Michelson, multiple entry points, static
type-checking à la ML, etc. Its
<a href="https://www.liquidity-lang.org/edit">online editor</a> allows you to develop smart
contracts and to deploy them directly onto the alphanet or
mainnet. Liquidity was used before the mainnet launch to
de-compile the Foundation’s vesting smart contracts in order to review
them. This smart contract language represents more than two years of
work, and is fully funded by OCamlPro. It has been developed with
formal verification in mind, formal verification being one of the
selling points of Tezos. We have elaborated a detailed roadmap mixing
model-checking and deductive program verification to investigate this
feature. We are now searching for funding opportunities to keep
developing and maintaining Liquidity.</p>
<blockquote>
<p>See our <a href="https://www.liquidity-lang.org/edit">online editor</a> to get started! Contact us if you need support, training, writing or in-depth analysis of your smart contracts.</p>
</blockquote>
<h2>Techelson: a testing framework for Michelson and Liquidity</h2>
<p><a href="https://ocamlpro.github.io/techelson/">Techelson</a> is our newborn in
the set of tools for the Tezos blockchain. It is a test execution
engine for the functional properties of Michelson and Liquidity
contracts. Techelson is still in its early development stage. The user
documentation <a href="https://ocamlpro.github.io/techelson/user_doc/">is available
here</a>. An example on
how to use it with Liquidity is detailed in <a href="https://adrienchampion.github.io/blog/tezos/techelson/with_liquidity/index.html">this
post</a>.</p>
<blockquote>
<p>Contact us to customize the engine to suit your own needs!</p>
</blockquote>
<h2>IronTez: an optimized Tezos node by OCamlPro</h2>
<p>IronTez is a tailored node for private (and public) deployments of
Tezos. Among its additional features, the node adds some useful RPCs,
improves storage, enables garbage collection and context pruning,
allows an easy configuration of the private network, provides
additional Michelson instructions (GET_STORAGE, CATCH…). One of its
nice features is the ability to enable adaptive baking in a private /
proof-of-authority setting (e.g. baking every 5 seconds in the presence of
transactions and every 10 minutes otherwise).</p>
<p>A simplified version of IronTez has already been made public to allow
testing its <a href="/blog/2019_02_04_improving_tezos_storage_gitlab_branch_for_testers">improved storage system,
Ironmin</a>,
showing a 10x reduction in storage. Some TzScan.io nodes are also
using versions of IronTez. We’ve also successfully deployed it along
with TzScan for a big foreign company to experiment with private
blockchains. We are searching for projects and funding opportunities
to keep developing and maintaining this optimized version of the Tezos
node.</p>
<blockquote>
<p>Don’t hesitate to contact us if you want to deploy a blockchain with IronTez, or for more information!</p>
</blockquote>
<h1>Comments</h1>
<p>Kristen (3 May 2019 at 0 h 30 min):</p>
<blockquote>
<p>I really wanted to keep using IronTez but I ran into bugs that have not yet been fixed, the code is out of date with upstream, and there is no real avenue for support/assistance other than email.</p>
</blockquote>
opam 2.0.4 releasehttps://ocamlpro.com/blog/2019_04_10_opam_2.0.4_release2019-04-10T08:12:13Z2019-04-10T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the release of opam 2.0.4. This new version contains some backported fixes: Sandboxing on macOS: considering the possibility that TMPDIR is unset [#3597 @herbelin - fix #3576]
display: Fix opam config var display, aligned on opam config list [#3723 @rjbou - rel. #3717]
pin...<p>We are pleased to announce the release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.4">opam 2.0.4</a>.</p>
<p>This new version contains some <a href="https://github.com/ocaml/opam/pull/3805">backported fixes</a>:</p>
<ul>
<li>Sandboxing on macOS: considering the possibility that TMPDIR is unset [<a href="https://github.com/ocaml/opam/pull/3597">#3597</a> <a href="https://github.com/herbelin">@herbelin</a> - fix <a href="https://github.com/ocaml/opam/issues/3576">#3576</a>]
</li>
<li>display: Fix <code>opam config var</code> display, aligned on <code>opam config list</code> [<a href="https://github.com/ocaml/opam/pull/3723">#3723</a> <a href="https://github.com/rjbou">@rjbou</a> - rel. <a href="https://github.com/ocaml/opam/issues/3717">#3717</a>]
</li>
<li>pin:
<ul>
<li>update source of (version) pinned directory [<a href="https://github.com/ocaml/opam/pull/3726">#3726</a> <a href="https://github.com/rjbou">@rjbou</a> - <a href="https://github.com/ocaml/opam/issues/3651">#3651</a>]
</li>
<li>fix <code>--ignore-pin-depends</code> with autopin [<a href="https://github.com/ocaml/opam/pull/3736">#3736</a> <a href="https://github.com/AltGr">@AltGr</a>]
</li>
<li>fix pinnings not installing/upgrading already pinned packages (introduced in 2.0.2) [<a href="https://github.com/ocaml/opam/pull/3800">#3800</a> <a href="https://github.com/AltGr">@AltGr</a>]
</li>
</ul>
</li>
<li>opam clean: Ignore errors trying to remove directories [<a href="https://github.com/ocaml/opam/pull/3732">#3732</a> <a href="https://github.com/kit-ty-kate">@kit-ty-kate</a>]
</li>
<li>remove wrong "mismatched extra-files" warning [<a href="https://github.com/ocaml/opam/pull/3744">#3744</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>urls: fix hg opam 1.2 url parsing [<a href="https://github.com/ocaml/opam/pull/3754">#3754</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>lint: update message of warning 47, to avoid confusion because of missing <code>synopsis</code> field internally inferred from <code>descr</code> [<a href="https://github.com/ocaml/opam/pull/3753">#3753</a> <a href="https://github.com/rjbou">@rjbou</a> - fix <a href="https://github.com/ocaml/opam/issues/3738">#3738</a>]
</li>
<li>system:
<ul>
<li>lock & signals: don't interrupt at non terminal signals [<a href="https://github.com/ocaml/opam/pull/3541">#3541</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>shell: fix fish manpath setting [<a href="https://github.com/ocaml/opam/pull/3728">#3728</a> <a href="https://github.com/gregory-nisbet">@gregory-nisbet</a>]
</li>
<li>git: use <code>diff.noprefix=false</code> config argument to overwrite user defined configuration [<a href="https://github.com/ocaml/opam/pull/3788">#3788</a> <a href="https://github.com/rjbou">@rjbou</a>, <a href="https://github.com/ocaml/opam/pull/3628">#3628</a> <a href="https://github.com/Blaisorblade">@Blaisorblade</a> - fix <a href="https://github.com/ocaml/opam/issues/3627">#3627</a>]
</li>
</ul>
</li>
<li>dirtrack: fix precise tracking mode [<a href="https://github.com/ocaml/opam/pull/3796">#3796</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
<li>fix some mispellings [<a href="https://github.com/ocaml/opam/pull/3731">#3731</a> <a href="https://github.com/MisterDA">@MisterDA</a>]
</li>
<li>CI enhancement & fixes [<a href="https://github.com/ocaml/opam/pull/3706">#3706</a> <a href="https://github.com/dra27">@dra27</a>, <a href="https://github.com/ocaml/opam/pull/3748">#3748</a> <a href="https://github.com/rjbou">@rjbou</a>, <a href="https://github.com/ocaml/opam/pull/3801">#3801</a> <a href="https://github.com/rjbou">@rjbou</a>]
</li>
</ul>
<blockquote>
<p>Note: To homogenise macOS name on system detection, we decided to keep <code>macos</code>, and convert <code>darwin</code> to <code>macos</code> in opam. For the moment, to not break jobs & CIs, we keep uploading <code>darwin</code> & <code>macos</code> binaries, but from the 2.1.0 release, only <code>macos</code> ones will be kept.</p>
</blockquote>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.4">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.4#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
opam 2.0 tipshttps://ocamlpro.com/blog/2019_03_12_opam_2.0_tips2019-03-12T08:12:13Z2019-03-12T08:12:13Z
Louis Gesbert
This blog post looks back on some of the improvements in opam 2.0, and gives tips on the new workflows available. Package development environment management Opam 2.0 has been vastly improved to handle locally defined packages. Assuming you have a project ~/projects/foo, defining two packages foo-lib...<p>This blog post looks back on some of the improvements in opam 2.0, and gives
tips on the new workflows available.</p>
<h2>Package development environment management</h2>
<p>Opam 2.0 has been vastly improved to handle locally defined packages. Assuming
you have a project <code>~/projects/foo</code>, defining two packages <code>foo-lib</code> and
<code>foo-bin</code>, you would have:</p>
<pre><code class="language-shell-session">~/projects/foo
|-- foo-lib.opam
|-- foo-bin.opam
`-- src/ ...
</code></pre>
<p>(See also about
<a href="../opam-extended-dependencies/#Computed-versions">computed dependency constraints</a>
for handling multiple package definitions with mutual constraints)</p>
<h3>Automatic pinning</h3>
<p>The underlying mechanism is the same, but this is an interface improvement that
replaces most of the opam 1.2 workflows based on <code>opam pin</code>.</p>
<p>The usual commands (<code>install</code>, <code>upgrade</code>, <code>remove</code>, etc.) have been extended to
support specifying a directory as argument. So when working on project <code>foo</code>,
just write:</p>
<pre><code class="language-shell-session">cd ~/projects/foo
opam install .
</code></pre>
<p>and both <code>foo-lib</code> and <code>foo-bin</code> will get automatically pinned to the current
directory (using git if your project is versioned), and installed. You may
prefer to use:</p>
<pre><code class="language-shell-session">opam install . --deps-only
</code></pre>
<p>to just get the package dependencies ready before you start hacking on it.
<a href="#Reproducing-build-environments">See below</a> for details on how to reproduce a
build environment more precisely. Note that <code>opam depext .</code> will not work at the
moment, which will be fixed in the next release when the external dependency
handling is integrated (opam will still list the proper packages to install
for your OS upon failure).</p>
<p>If your project is versioned and you made changes, remember to either commit, or
add <code>--working-dir</code> so that your uncommitted changes are taken into account.</p>
<h2>Local switches</h2>
<blockquote>
<p>Opam 2.0 introduced a new feature called "local switches". This section
explains what it is about, why, when and how to use them.</p>
</blockquote>
<p>Opam <em>switches</em> allow you to maintain several separate development environments,
each with its own set of packages installed. This is particularly useful when
you need different OCaml versions, or for working on projects with different
dependency sets.</p>
<p>It can sometimes become tedious, though, to manage them, or to remember which switch
to use with which project. Here is where "local switches" come in handy.</p>
<h3>How local switches are handled</h3>
<p>A local switch is simply stored inside a <code>_opam/</code> directory, and will be
selected automatically by opam whenever your current directory is below its
parent directory.</p>
<blockquote>
<p>NOTE: it's highly recommended that you enable the new <em>shell hooks</em> when using
local switches. Just run <code>opam init --enable-shell-hook</code>: this will make sure
your PATH is always set for the proper switch.</p>
<p>You will otherwise need to keep remembering to run <code>eval $(opam env)</code> every
time you <code>cd</code> to a directory containing a local switch. See also
<a href="http://opam.ocaml.org/doc/Tricks.html#Display-the-current-quot-opam-switch-quot-in-the-prompt">how to display the current switch in your prompt</a></p>
</blockquote>
<p>For example, if you have <code>~/projects/foo/_opam</code>, the switch will be selected
whenever in project <code>foo</code>, allowing you to tailor what it has installed for the
needs of your project.</p>
<p>If you remove the switch dir, or your whole project, opam will forget about it
transparently. Be careful not to move it around, though, as some packages still
contain hardcoded paths and don't handle relocation well (we're working on
that).</p>
<h3>Creating a local switch</h3>
<p>This can generally start with:</p>
<pre><code class="language-shell-session">cd ~/projects/foo
opam switch create . --deps-only
</code></pre>
<p>Local switch handles are just their path, instead of a raw name. Additionally,
the above will detect package definitions present in <code>~/projects/foo</code>, pick a
compatible version of OCaml (if you didn't explicitly mention any), and
automatically install all the local package dependencies.</p>
<p>Without <code>--deps-only</code>, the packages themselves would also get installed in the
local switch.</p>
<h3>Using an existing switch</h3>
<p>If you just want an already existing switch to be selected automatically,
without recompiling one for each project, you can use <code>opam switch link</code>:</p>
<pre><code class="language-shell-session">cd ~/projects/bar
opam switch link 4.07.1
</code></pre>
<p>will make sure that switch <code>4.07.1</code> is chosen whenever you are in project <code>bar</code>.
You could even link to <code>../foo</code> here, to share <code>foo</code>'s local switch between the
two projects.</p>
<h2>Reproducing build environments</h2>
<h4>Pinnings</h4>
<p>If your package depends on development versions of some dependencies (e.g. you
had to push a fix upstream), add to your opam file:</p>
<pre><code class="language-shell-session">depends: [ "some-package" ] # Remember that pin-depends are depends too
pin-depends: [
[ "some-package.version" "git+https://gitfoo.com/blob.git#mybranch" ]
]
</code></pre>
<p>This will have no effect when your package is published in a repository, but
when it gets pinned to its dev version, opam will first make sure to pin
<code>some-package</code> to the given URL.</p>
<h4>Lock-files</h4>
<p>Dependency constraints are sometimes too wide, and you don't want to explore all
the versions of your dependencies while developing. For this reason, you may
want to reproduce a known-working set of dependencies. If you use:</p>
<pre><code class="language-shell-session">opam lock .
</code></pre>
<p>opam will check which versions of the dependencies are installed in your current
switch, and make them explicit in <code>*.opam.locked</code> files. <code>opam lock</code> is a plugin at
the moment, but will get automatically installed when needed.</p>
<p>Then, assuming you checked these files into version control, any user can do</p>
<pre><code class="language-shell-session">opam install . --deps-only --locked
</code></pre>
<p>to instruct opam to reproduce the same build environment (the <code>--locked</code> option
is also available to <code>opam switch create</code>, to make things easier).</p>
<p>The generated lock-files will also contain added constraints to reproduce the
presence/absence of optional dependencies, and reproduce the appropriate
dependency pins using <code>pin-depends</code>. Add the <code>--direct-only</code> option if you don't
want to enforce the versions of all recursive dependencies, but only direct
ones.</p>
Release : Liquidity version 1.0 ! https://ocamlpro.com/blog/2019_03_09_release_liquidity_v1_smart_contracts_language2019-03-09T08:12:13Z2019-03-09T08:12:13Z
Çagdas Bozman
We are proud to announce the release of the first major version of Liquidity, the smart contract language, and its tooling. Among its flagship features: multiple entry points, a modular contract system, polymorphism and type inference, a ReasonML syntax for wider adoption...<p>We are proud to announce the release of the first major version of Liquidity, the smart contract language, and its tooling. Among its flagship features: multiple entry points, a modular contract system, polymorphism and type inference, a ReasonML syntax for wider adoption, etc.</p>
<p>See <a href="/blog/2019_03_08_announcing_liquidity_version_1_0">this article!</a></p>
Announcing Liquidity version 1.0https://ocamlpro.com/blog/2019_03_08_announcing_liquidity_version_1_02019-03-08T08:12:13Z2019-03-08T08:12:13Z
Alain Mebsout
Liquidity version 1.0 We are pleased to announce the release of the first major version of the Liquidity smart-contract language and associated tools. Some of the highlights of this version are detailed below. Multiple Entry Points In the previous versions of Liquidity, smart contracts were limited ...<h1>Liquidity version 1.0</h1>
<p>We are pleased to announce the release of the first major version of the Liquidity smart-contract language and associated tools.</p>
<p>Some of the highlights of this version are detailed below.</p>
<h3>Multiple Entry Points</h3>
<p>In the previous versions of Liquidity, smart contracts were limited to a single entry point (named <code>main</code>). But a smart contract's execution paths traditionally depend strongly on its parameter, and in most cases they are completely distinct.</p>
<p>Having different entry points allows you to separate code paths that do not overlap and that usually accomplish vastly different tasks. Previously, encoding entry points with complex pattern-matching constructs was tedious and made the code much less readable. This new feature gives you readability and allows contracts to be called in a natural way.</p>
<p>Internally, entry points are encoded with sum types and pattern matching so that you keep the strong typing guarantees that come over from Michelson. This means that you cannot call a typed smart contract with the wrong entry point or the wrong parameter (this is enforced statically by both the Liquidity typechecker and the Michelson typechecker).</p>
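<p>As a small, hypothetical sketch (the entry point names and types below are made up, and details may differ in your contracts), a contract with two entry points simply defines two <code>let%entry</code> bindings:</p>
<pre><code class="language-ocaml">type storage = int

(* Each entry point takes its own parameter plus the storage, and returns
   the emitted operations together with the new storage. *)
let%entry increment (i : int) storage =
  ([] : operation list), storage + i

let%entry reset (_ : unit) _storage =
  ([] : operation list), 0
</code></pre>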
<h3>Modules and Contract System</h3>
<p>Organizing, encapsulating and sharing code is not always easy when you need to write thousand-line files. Liquidity now allows you to write modules (which contain types and values/functions) and contracts (which define entry points in addition). Types and non-private values of contracts and modules in scope can be accessed by other modules and contracts.</p>
<p>You can even compile several files at once with the command line compiler, so that you may organize your multiple smart contract projects in libraries and files.</p>
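<p>A minimal, hypothetical sketch of this feature (the module and function names are invented): a module groups types and helper functions, and a contract's entry points can refer to them directly:</p>
<pre><code class="language-ocaml">module Fees = struct
  type rate = int
  (* A helper shared by several contracts. *)
  let apply (r : rate) (amount : int) = amount + r
end

type storage = int

let%entry pay (amount : int) storage =
  ([] : operation list), storage + Fees.apply 1 amount
</code></pre>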
<h3>Polymorphism and Type Inference</h3>
<p>Thanks to a new and powerful type inference algorithm, you can now get rid of almost all type annotations in the smart contracts.</p>
<p>Instead of writing something like</p>
<pre><code class="language-ocaml">let%entry main (parameter : bool) (storage : int) =
let ops = ([] : operation list) in
let f (c : bool) = if not c then 1 else 2 in
ops, f parameter
</code></pre>
<p>you can now write</p>
<pre><code class="language-ocaml">let%entry main parameter _ =
let ops = [] in
let f c = if not c then 1 else 2 in
ops, f parameter
</code></pre>
<p>And type inference works with polymorphism (also a new feature of this release), so you can now write generic and reusable functions:</p>
<pre><code class="language-ocaml">type 'a t = { x : 'a set; y : 'a }
let mem_t v = Set.mem v.y v.x
</code></pre>
<p>Inference also works with contract types and entry points.</p>
<h3>ReasonML Syntax</h3>
<p>We originally used a modified version of the OCaml syntax for the Liquidity language. This made the language accessible, almost for free, to all OCaml and functional language developers. The typing discipline one needs is quite similar to other strongly typed functional languages so this was a natural fit.</p>
<p>However this is not the best fit for everyone. We want to bring the power of Liquidity and Tezos to the masses, so adopting a familiar-looking syntax for most people can help a lot. With this new version of Liquidity, you can now write your smart contracts in either an OCaml-like syntax or a <a href="https://reasonml.github.io">ReasonML</a>-like one. The latter is a lot closer to JavaScript on the surface, making it accessible to people who already know that language or who write smart contracts for other platforms like Solidity/Ethereum.</p>
<p>You can see the full changelog as well as download the latest release and binaries <a href="https://github.com/OCamlPro/liquidity/releases">at this address</a>.</p>
<p>Don't forget that you can also try all these new cool features and more directly in your browser with our <a href="https://www.liquidity-lang.org/edit/">online editor</a>.</p>
Release de Techelson, moteur de tests pour Michelson et Liquidity https://ocamlpro.com/blog/2019_03_07_fr_release_de_techelson_moteur_de_tests_pour_michelson_et_liquidity2019-03-07T08:12:13Z2019-03-07T08:12:13Z
Adrien Champion
We are proud to announce the first release of Techelson, a test execution engine for Michelson. Liquidity programmers can also use it. See Techelson, a test execution engine for Michelson....<p>We are proud to announce the first release of Techelson, a test execution engine for Michelson. Liquidity programmers can also use it.</p>
<p>See <a href="/2019/03/05/techelson-a-test-execution-engine-for-michelson/">Techelson, a test execution engine for Michelson</a>.</p>
Techelson, a test execution engine for Michelsonhttps://ocamlpro.com/blog/2019_03_06_techelson_a_test_execution_engine_for_michelson2019-03-06T08:12:13Z2019-03-06T08:12:13Z
Adrien Champion
We are pleased to announce the first release of Techelson, available here. Techelson is a Test Execution Engine for Michelson. It aims at testing functional properties of Michelson smart contracts. Make sure to check the user documentation to get a sense of Techelson's workflow and features. For Liq...<p>We are pleased to announce the first release of <a href="https://ocamlpro.github.io/techelson/">Techelson,</a> available <a href="https://github.com/OCamlPro/techelson/releases/tag/v0.7.0">here</a>.</p>
<p>Techelson is a Test Execution Engine for Michelson. It aims at testing functional properties of Michelson smart contracts. Make sure to check the <a href="https://ocamlpro.github.io/techelson/user_doc">user documentation</a> to get a sense of Techelson's workflow and features.</p>
<p>For Liquidity programmers interested in Techelson, take a look at <a href="https://adrienchampion.github.io/blog/tezos/techelson/with_liquidity/index.html">this blog post</a> discussing how to write tests in Liquidity and run them using Techelson.</p>
<p>Techelson is still young: if you have problems, suggestions or feature requests please <a href="https://github.com/OCamlPro/techelson/issues">open an issue on the repository</a>.</p>
Signing Data for Smart Contracts https://ocamlpro.com/blog/2019_03_05_signing_data_for_smart_contracts2019-03-05T08:12:13Z2019-03-05T08:12:13Z
Çagdas Bozman
Smart contract calls already provide a built-in authentication mechanism, as transactions (i.e. call operations) are cryptographically signed by the sender of the transaction. This is a guarantee on which programs can rely. However, sometimes you may want more involved or flexible authentication sch...<p>Smart contract calls already provide a built-in authentication mechanism, as transactions (i.e. call operations) are cryptographically signed by the sender of the transaction. This is a guarantee on which programs can rely.</p>
<p>However, sometimes you may want more involved or flexible authentication schemes. The ones that rely on signature validity checking can be implemented in Michelson, and Liquidity provides a built-in instruction to do so. (You still need to keep in mind that you cannot store unencrypted confidential information on the blockchain.)</p>
<p>This instruction is <code>Crypto.check</code> in Liquidity. Its type can be written as:</p>
<pre><code class="language-ocaml">Crypto.check: key -> signature -> bytes -> bool
</code></pre>
<p>Which means that it takes as arguments a public key, a signature and a sequence of bytes and returns a Boolean. <code>Crypto.check pub_key signature message</code> is <code>true</code> if and only if the signature <code>signature</code> was obtained by signing the Blake2b hash of <code>message</code> using the private key corresponding to the public key <code>pub_key</code>.</p>
<p>A small smart contract snippet which implements a signature check (against a predefined public key kept in the smart contract's storage) <a href="https://liquidity-lang.org/edit?source=type+storage+%3D+key%0A%0Alet%25entry+main+%28%28message+%3A+string%29%2C+%28signature+%3A+signature%29%29+key+%3D%0A++let+bytes+%3D+Bytes.pack+message+in%0A++if+not+%28Crypto.check+key+signature+bytes%29+then%0A++++failwith+%22Wrong+signature%22%3B%0A++%28%5B%5D+%3A+operation+list%29%2C+key%0A">can be tested online here.</a></p>
<pre><code class="language-ocaml">type storage = key
let%entry main ((message : string), (signature : signature)) key =
let bytes = Bytes.pack message in
if not (Crypto.check key signature bytes) then
failwith "Wrong signature";
([] : operation list), key
</code></pre>
<p>This smart contract fails if the string <code>message</code> was not signed with the private key corresponding to the public key <code>key</code> stored. Otherwise it does nothing.</p>
<p>This signature scheme is more flexible than the default transaction/sender one; however, it requires that the signature can be built outside of the smart contract (and, more generally, outside of the toolset provided by Liquidity and Tezos). On the other hand, signing a transaction is something you get for free if you use the Tezos client or any Tezos wallet (as it is essentially their base function).</p>
<p>The rest of this blog post will focus on various ways to sign data, and on getting signatures that can be used in Tezos and Liquidity directly.</p>
<h3>Signing Using the Tezos Client</h3>
<p>One (straightforward) way to sign data is to use the Tezos client directly. You will need to be connected to a Tezos node though as the client makes RPCs to serialize data (this operation is protocol dependent). We can only sign sequences of bytes, so the first thing we need to do is to serialize whichever data we want to sign. This can be done with the command <code>hash data</code> of the client.</p>
<pre><code class="language-shell-session">$ ./tezos-client -A alphanet-node.tzscan.io -P 80 hash data '"message"' of type string
Raw packed data:
0x0501000000076d657373616765
Hash:
exprtXaZciTDGatZkoFEjE1GWPqbJ7FtqAWmmH36doxBreKr6ADcYs
Raw Blake2b hash:
0x01978930fd2d04d0db8c2e4ef8a3f5d63b8e732177c8723135ed0dc7d99ebed3
Raw Sha256 hash:
0x32569319f6517036949bcead23a761bfbfcbf4277b010355884a86ba09349839
Raw Sha512 hash:
0xdfa4ea9f77db3a98654f101be1d33d56898df40acf7c2950ca6f742140668a67fefbefb22b592344922e1f66c381fa2bec48aa47970025c7e61e35d939ae3ca0
Gas remaining: 399918 units remaining
</code></pre>
<p>This command gives the result of hashing the data using various algorithms, but what we're really interested in is the first item, <code>Raw packed data</code>, which is the serialized version of our data (<code>"message"</code>): <code>0x0501000000076d657373616765</code>.</p>
<p>We can now sign these bytes using the Tezos client as well. This step can be performed completely offline, for that we need to use the option <code>-p</code> of the client to specify the protocol we want to use (the <code>sign bytes</code> command will not be available without first selecting a valid protocol). Here we use protocol 3, designated by its hash <code>PsddFKi3</code>.</p>
<pre><code class="language-shell-session">$ ./tezos-client -p PsddFKi3 sign bytes 0x0501000000076d657373616765 for my_account
Signature:
edsigto9QHtXMyxFPyvaffRfFCrifkw2n5ZWqMxhGRzieksTo8AQAFgUjx7WRwqGPh4rXTBGGLpdmhskAaEauMrtM82T3tuxoi8
</code></pre>
<p>The account <code>my_account</code> can be any imported account in the Tezos client. In particular, it can be an encrypted key pair (you will need to enter a password to sign) or a hardware Ledger (you will need to confirm the signature on the Ledger). The obtained signature can be used as is with Liquidity or Michelson. This one starts with <code>edsig</code> because it was obtained using an Ed25519 private key, but you can also get signatures starting with <code>spsig1</code> or <code>p2sig</code> depending on the cryptographic curve that you use.</p>
<h3>Signing Manually</h3>
<p>In this second section we detail the necessary steps and provide a Python script to sign string messages using an Ed25519 private key. This can be easily adapted for other signing schemes.</p>
<p>These are the steps that will need to be performed in order to sign a string:</p>
<ul>
<li>Assuming that the value you want to sign is a string, you first need to convert its ASCII representation to hexadecimal; for the string <code>"message"</code> that is <code>6d657373616765</code>.
</li>
<li>You need to produce the packed version of the corresponding Michelson expression. The binary representation can vary depending on the types of the values you want to pack but for strings it is:
</li>
</ul>
<pre><code class="language-michelson">| 0x | 0501 | [size of the string on 4 bytes] | [ascii string in hexa] |
</code></pre>
<p>for <code>"message"</code> (of length 7), it is</p>
<pre><code class="language-michelson">| 0x | 0501 | 00000007 | 6d657373616765 |
</code></pre>
<p>or <code>0x0501000000076d657373616765</code>.</p>
<ul>
<li>Hash this value using <a href="https://en.wikipedia.org/wiki/BLAKE_(hash_function)">Blake2b</a> (<code>01978930fd2d04d0db8c2e4ef8a3f5d63b8e732177c8723135ed0dc7d99ebed3</code>) which is 32 bytes long.
</li>
<li>Depending on your public key, you then need to sign it with the corresponding curve (Ed25519 for edpk keys); the signature is 64 bytes:
</li>
</ul>
<pre><code class="language-michelson">753e013b8515a7d47eaa5424de5efa2f56620ac8be29d08a6952ae414256eac44b8db71f74600275662c8b0c226f3280e9d24e70a5fa83015636b98059b5180c
</code></pre>
<ul>
<li>Optionally convert to base58check. This is not needed because Liquidity and Michelson allow signatures (as well as keys and key hashes) to be given in hexadecimal format with a <code>0x</code> prefix:
</li>
</ul>
<pre><code class="language-michelson">0x753e013b8515a7d47eaa5424de5efa2f56620ac8be29d08a6952ae414256eac44b8db71f74600275662c8b0c226f3280e9d24e70a5fa83015636b98059b5180c
</code></pre>
<p>The following Python 3 script will do exactly this, entirely offline. Note that this is just a toy example and should not be used in production. In particular, you have to pass your private key on the command line, which is not safe if the machine you run it on is not fully trusted.</p>
<pre><code class="language-shell-session">$ pip3 install base58check pyblake2 ed25519
$ python3 ./sign_string.py "message" edsk2gL9deG8idefWJJWNNtKXeszWR4FrEdNFM5622t1PkzH66oH3r
0x753e013b8515a7d47eaa5424de5efa2f56620ac8be29d08a6952ae414256eac44b8db71f74600275662c8b0c226f3280e9d24e70a5fa83015636b98059b5180c
</code></pre>
<h4><code>sign_string.py</code></h4>
<pre><code class="language-python">from pyblake2 import blake2b
import base58check
import ed25519
import sys
message = sys.argv[1]
seed_b58 = sys.argv[2]
prefix = b'\x05\x01'
len_bytes = (len(message)).to_bytes(4, byteorder='big')
h = blake2b(digest_size=32)
b = bytearray()
b.extend(message.encode())
h.update(prefix + len_bytes + b)
digest = h.digest()
seed = base58check.b58decode(seed_b58)[4:-4]
sk = ed25519.SigningKey(seed)
sig = sk.sign(digest)
print("0x" + sig.hex())
</code></pre>
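<p>If you do want the <code>base58check</code> form from the optional last step, here is a minimal sketch of the encoding; the <code>to_edsig</code> helper is only an illustration (it assumes the 5-byte <code>edsig</code> prefix and a 4-byte checksum taken from a double SHA256 of the prefixed signature):</p>
<pre><code class="language-python">import hashlib
import base58check

def to_edsig(raw_sig):
    # 0x09f5cd8612: prefix that yields the leading 'edsig' characters
    prefix = bytes([9, 245, 205, 134, 18])
    payload = prefix + raw_sig
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return base58check.b58encode(payload + checksum).decode('ascii')

sig_hex = "753e013b8515a7d47eaa5424de5efa2f56620ac8be29d08a6952ae414256eac44b8db71f74600275662c8b0c226f3280e9d24e70a5fa83015636b98059b5180c"
print(to_edsig(bytes.fromhex(sig_hex)))
</code></pre>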
What's new for Alt-Ergo in 2018? Here is a recap!https://ocamlpro.com/blog/2019_02_11_whats_new_for_alt_ergo_in_2018_here_is_a_recap2019-02-11T08:12:13Z2019-02-11T08:12:13Z
Mohamed Iguernlala
After the hard work done on the integration of floating-point arithmetic reasoning two years ago, 2018 is the year of polymorphic SMT2 support and efficient SAT solving for Alt-Ergo. In this post, we recap the main novelties last year, and we announce the first Alt-Ergo Users’ Club meeting. An SMT...<p>After the hard work done on the integration of floating-point arithmetic reasoning two years ago, 2018 is the year of polymorphic SMT2 support and efficient SAT solving for Alt-Ergo. In this post, we recap the main novelties last year, and we announce the first Alt-Ergo Users’ Club meeting.</p>
<h2>An SMT2 front-end with prenex polymorphism</h2>
<p>As you may know, Alt-Ergo’s native input language is not compliant with the SMT-LIB 2 input language standard, and translating formulas from SMT-LIB 2 to Alt-Ergo’s syntax (or vice versa) is not immediate. Besides its extension with polymorphism, this native language diverges from SMT-LIB’s by distinguishing terms of type <code>boolean</code> from formulas (which are <code>propositions</code>). This distinction makes it hard, for instance, to efficiently translate <code>let-in</code> and <code>if-then-else</code> constructs that are ubiquitous in SMT-LIB 2 benchmarks.</p>
<p>In order to work closely with the SMT community, we designed a conservative extension of the SMT-LIB 2 standard with <code>prenex polymorphism</code> and implemented it as a new frontend in Alt-Ergo 2.2. This work was published in the 2018 edition of the SMT-Workshop; an online version of the paper is <a href="https://hal.inria.fr/hal-01960203">available here</a>. Experimental results showed that polymorphism is really important for Alt-Ergo, as it improves both the resolution rate and the resolution time (see Figure 5 in the paper for more details).</p>
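<p>For illustration, here is a minimal sketch of what polymorphism looks like in Alt-Ergo’s native syntax (the SMT2 extension provides the same expressiveness in SMT-LIB style); the <code>id</code> symbol below is just a made-up example:</p>
<pre><code class="language-OCaml">(* a polymorphic symbol declared once, usable at several types *)
logic id : 'a -> 'a

axiom id_int  : forall x : int.  id(x) = x
axiom id_real : forall x : real. id(x) = x

goal g : id(1) = 1 and id(0.) = 0.
</code></pre>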
<h2>Improved SAT solvers</h2>
<p>We also worked on improving SAT solving in Alt-Ergo last year. The main direction towards this goal was to extend our CDCL-based SAT solver to mimic some desired behaviors of the native Tableaux-like SAT engine. Generally speaking, this allows better management of the context during proof search, which prevents overwhelming the theories and instantiation engines with useless facts. A comparison of this solver with Alt-Ergo’s old Tableaux-like solver is also done in our SMT-Workshop paper.</p>
<h2>SMT-Comp and SMT-Workshop 2018</h2>
<p>As emphasized above, we published our work regarding polymorphic SMT2 and SAT solving in SMT-Workshop 2018. More generally, this was an occasion for us to write the first tool paper about Alt-Ergo, and to highlight the main features that make it different from other state-of-the-art SMT solvers like CVC4, Z3 or Yices.</p>
<p>Thanks to our new SMT2 frontend, we were able to participate in the SMT-Competition last year. Naturally, we selected categories that are close to “deductive program verification”, as Alt-Ergo is primarily tuned for formulas coming from this application domain.</p>
<p>Although Alt-Ergo <a href="http://smtcomp.sourceforge.net/2018/results-summary.shtml?v=1531410683">did not rank first</a>, it was a positive experience and this encourages us to go ahead. Note that Alt-Ergo’s brother, Ctrl-Ergo, was not far from winning <a href="http://smtcomp.sourceforge.net/2018/results-QF_LIA.shtml">the QF-LIA category</a> of the competition. This performance is partly due to the improvements in the CDCL SAT solver that were also integrated in Ctrl-Ergo.</p>
<h2>Alt-Ergo for Atelier-B</h2>
<p><a href="https://www.atelierb.eu/en/">Atelier-B</a> is a framework that allows to develop formally verified software using <a href="https://www.methode-b.com/en/b-method/">the B Method</a>. The framework rests on an automatic reasoner that allows to discharges thousands of mathematical formulas extracted from B models. If a formula is not discharged automatically, it is proved interactively. <a href="https://www.clearsy.com/en/">ClearSy</a> (the company behind development of Atelier-B) has recently added a new backend to produce verification conditions in Why3’s logic, in order to target more automatic provers and increase automation rate. For certifiability reasons, we extended Alt-Ergo with a new frontend that is able to directly parse these verification conditions without relying on Why3.</p>
<h2>Improved hash-consed data-structures</h2>
<p>As said above, Alt-Ergo makes a clear distinction between Boolean terms and propositions. This distinction prevents us from doing some rewriting and simplifications, in particular on expressions involving <code>let-in</code> and <code>if-then-else</code> constructs. This is why we decided to merge <code>Term</code>, <code>Literal</code>, and <code>Formula</code> into a new <code>Expr</code> data-structure, and remove this distinction. This allowed us to implement some additional simplification steps, and we immediately noticed performance improvements, in particular on SMT2 benchmarks. For instance, Alt-Ergo 2.3 proves 19548 formulas of the AUFLIRA category in ~350 minutes, while version 2.2 proves 19535 formulas in ~1450 minutes (the time limit was set to 20 minutes per formula).</p>
<h2>Towards the integration of algebraic datatypes</h2>
<p>Last autumn, we also started working on the integration of algebraic datatypes reasoning in Alt-Ergo. In this first iteration, we extended Alt-Ergo’s native language to be able to declare (mutually recursive) algebraic datatypes, to write expressions with pattern matching, to handle selectors, … We then extended the typechecker accordingly and implemented a (not that) basic theory reasoner. Of course, we also handle SMT2’s algebraic datatypes. Here is an example in Alt-Ergo’s native syntax:</p>
<pre><code class="language-OCaml">type ('a, 'b) t = A of {a_1 : 'a} | B of {b_11 : 'a ; b12 : 'b} | C | D | E
logic e : (int, real) t
logic n : int
axiom ax_n : n &gt;= 9
axiom ax_e:
e = A(n) or e = B(n*n, 0.) or e = E
goal g:
match e with
| A(u) -> u >= 8
| B (u,v) -> u >= 80 and v = 0.
| E -> true
| _ -> false
end
and 3 <= 2+2
</code></pre>
<h2>What is planned in 2019 and beyond: the Alt-Ergo’s Users’ Club is born!</h2>
<p>In 2018, we welcomed a lot of new engineers with a background in formal methods: Steven (De Oliveira) holds a PhD in formal verification from the Paris-Saclay University and the French Atomic Energy Commission (CEA). He has a master’s degree in cryptography and worked in the Frama-C team, developing open-source tools for verifying C programs. David (Declerck) obtained a PhD from Université Paris-Saclay in 2018, during which he extended the Cubicle model checker to support weak memory models and wrote a compiler from a subset of the x86 assembly language to Cubicle. Guillaume (Bury) holds a PhD from Université Sorbonne Paris Cité. He studied the integration of rewriting techniques inside SMT solvers. Albin (Coquereau) is working as a PhD student between OCamlPro, LRI and ENSTA, focusing on improving the Alt-Ergo SMT solver. Adrien is interested in verification of safety properties over software and embedded systems. He worked on higher-order functional program verification at the University of Tokyo, and on the Kind 2 model checker at the University of Iowa. All these people will consolidate the department of formal methods at OCamlPro, which will be beneficial for Alt-Ergo.</p>
<p>In 2019, we launched the Alt-Ergo Users’ Club, in order to get closer to our users, collect their needs and integrate them into the Alt-Ergo roadmap, but also to ensure sustainable funding for the development of the project. We are happy to announce that the very first member of the Club is <a href="https://www.adacore.com">Adacore</a>, very soon to be followed by <a href="https://trust-in-soft.com">Trust-In-Soft</a> and <a href="http://www-list.cea.fr/en/">CEA List</a>. Thanks for your early support!</p>
<blockquote>
<p>Interested to join? Contact us: contact@ocamlpro.com</p>
</blockquote>
Optimisation du stockage dans Tezos : une branche de test sur Gitlab https://ocamlpro.com/blog/2019_02_05_fr_optimisation_du_stockage_dans_tezos_une_branche_de_test_sur_gitlab2019-02-05T08:12:13Z2019-02-05T08:12:13Z
Fabrice Le Fessant
This third article on improving Tezos storage follows the announcement of the availability of a docker image for beta testers wishing to try our storage and garbage collector system. See Improving Tezos Storage: Gitlab branch for testers...<p>This third article on improving Tezos storage follows the announcement of the availability of a docker image for
beta testers wishing to try our storage and garbage collector system.</p>
<p>See <a href="/2019/02/04/improving-tezos-storage-gitlab-branch-for-testers/">Improving Tezos Storage: Gitlab branch for testers</a></p>
Improving Tezos Storage : Gitlab branch for testershttps://ocamlpro.com/blog/2019_02_04_improving_tezos_storage_gitlab_branch_for_testers2019-02-04T08:12:13Z2019-02-04T08:12:13Z
Fabrice Le Fessant
This article is the third post of a series of posts on improving Tezos storage. In our previous post, we announced the availability of a docker image for beta testers, wanting to test our storage and garbage collector. Today, we are glad to announce that we rebased our code on the latest version of ...<p>This article is the third post of a series of posts on improving Tezos
storage. In <a href="http://ocamlpro.com/2019/01/30/improving-tezos-storage-update-and-beta-testing/">our previous
post</a>,
we announced the availability of a docker image for beta testers,
wanting to test our storage and garbage collector. Today, we are glad
to announce that we rebased our code on the latest version of
<code>mainnet-staging</code>, and pushed a branch <code>mainnet-staging-irontez</code> on our
<a href="https://gitlab.com/tzscan/tezos/commits/mainnet-staging-irontez">public Gitlab
repository</a>.</p>
<p>The only difference with the previous post is a change in the name of
the RPCs : <code>/storage/context/gc</code> will trigger a garbage collection
(and terminate the node afterwards) and <code>/storage/context/revert</code> will
migrate the database back to Irmin (and terminate the node
afterwards).</p>
<p>Enjoy and send us feedback !!</p>
<h1>Comments</h1>
<p>AppaDude (10 February 2019 at 15 h 12 min):</p>
<blockquote>
<p>I must be missing something. I compiled and issued the required rpc trigger:</p>
<p>/storage/context/gc with the command</p>
<p>~/tezos/tezos-client rpc get /storage/context/gc
But I just got an empty JSON response of {} and the size of the .tezos-node folder is unchanged. Any advice is much appreciated.
Thank you!</p>
</blockquote>
<p>Fabrice Le Fessant (10 February 2019 at 15 h 47 min):</p>
<blockquote>
<p>By default, garbage collection will keep 9 cycles of blocks (~36000 blocks). If you have fewer blocks, or if you are using Irontez on a former Tezos database, and fewer than 9 cycles have been stored in Irontez, nothing will happen. If you want to force a garbage collection, you should tell Irontez to keep fewer blocks (but more than 100, that’s the minimum that we enforce):</p>
<p>~/tezos/tezos-client rpc get ‘/storage/context/gc?keep=120’</p>
<p>should trigger a GC if the node has been running on Irontez for at least 2 hours.</p>
</blockquote>
<p>AppaDude (10 February 2019 at 16 h 04 min):</p>
<blockquote>
<p>I think it did work. I was confused because the total disk space for the .tezos-node folder remained unchanged. Upon closer inspection, I see these contents and sizes:</p>
<p>These are the contents of .tezos-node, can I safely delete context.backup?</p>
<p>4.0K config.json
269M context
75G context.backup
4.0K identity.json
4.0K lock
1.4M peers.json
5.4G store
4.0K version.json</p>
</blockquote>
<blockquote>
<p>Is it safe to delete context.backup if I do not plan to revert? (/storage/context/revert)</p>
</blockquote>
<p>Fabrice Le Fessant (10 February 2019 at 20 h 51 min):</p>
<blockquote>
<p>Yes, normally. Don’t forget it is still under beta-testing…</p>
<p>Note that <code>/storage/context/revert</code> works even if you remove <code>context.backup</code>.</p>
</blockquote>
<p>Jack (23 February 2019 at 0 h 24 min):</p>
<blockquote>
<p>Have there been any issues reported with missing endorsements or missing bakings with this patch? We have been using this gc version (https://gitlab.com/tezos/tezos/merge_requests/720) for the past month and ever since we switched we have been missing endorsements and missing bakings. The disk space savings is amazing, but if we keep missing ends/bakes, it’s going to hurt our reputation as a baking service.</p>
</blockquote>
<p>Fabrice Le Fessant (23 February 2019 at 6 h 58 min):</p>
<blockquote>
<p>Hi,</p>
<p>I am not sure what you are asking for. Are you using our version (https://gitlab.com/tzscan/tezos/commits/mainnet-staging-irontez), or the one on the Tezos repository ? Our version is very different, so if you are using the other one, you should contact them directly on the merge request. On our version, we got a report last week, and the branch has been fixed immediately (but not yet the docker images, should be done in the next days).</p>
</blockquote>
<p>Jack (25 February 2019 at 15 h 53 min):</p>
<blockquote>
<p>I was using the 720MR and experiencing issues with baking/endorsing. I understand that 720MR and IronTez are different. I was simply asking if your version has had any reports of baking/endorsing troubles.</p>
</blockquote>
<p>Jack (25 February 2019 at 15 h 51 min):</p>
<blockquote>
<p>Is there no way to convert a “standard node” to IronTez? I was running the official tezos-node, and my datadir is around 90G. I compiled IronTez and started it up on that same dir, then ran <code>rpc get /storage/context/gc</code> and nothing is happening. I thought this was supposed to convert my datadir to irontez? If not, what is the RPC to do this? Or must I start from scratch to be 100% irontez?</p>
</blockquote>
<p>Fabrice Le Fessant (25 February 2019 at 16 h 24 min):</p>
<blockquote>
<p>There are two ways to get a full Irontez DB:</p>
<ul>
<li>Start a node from scratch and wait for one or two days…
</li>
<li>Use an existing node, run Irontez on it for 2 hours, and then call <code>rpc get /storage/context/gc?keep=100</code> . 100 is the number of blocks to be kept. After 2 hours, the last 120 blocks should be stored in the IronTez DB, so the old DB will not be used anymore. Note that Irontez will not delete the old DB, just rename it. You should go there and remove the file to recover the disk space.
</li>
</ul>
</blockquote>
<p>Jack (27 February 2019 at 1 h 24 min):</p>
<blockquote>
<p>Where do we send feedback/get help? Email? Slack? Reddit?</p>
</blockquote>
<p>Banjo E. (3 March 2019 at 2 h 40 min):</p>
<blockquote>
<p>There is a major problem for bakers who want to use the irontez branch. After garbage collection, the baker application will not start because the baker requests an RPC call for the genesis block information. That genesis block information is gone after the garbage collection. Please address this issue soon. Thank you!</p>
</blockquote>
<p>Fabrice Le Fessant (6 March 2019 at 21 h 44 min):</p>
<blockquote>
<p>I pushed a new branch with a tentative fix: https://gitlab.com/tzscan/tezos/tree/mainnet-staging-irontez-fix-genesis . Unfortunately, I could not test it (I am far away from work for two weeks), so feedback is really welcome, before pushing in the irontez branch.</p>
</blockquote>
Tezos et OCamlProhttps://ocamlpro.com/blog/2019_01_31_fr_tezos_et_ocamlpro2019-01-31T08:12:13Z2019-01-31T08:12:13Z
Fabrice Le Fessant
Tezos is today an open-source project, an international network developed by teams spanning over five continents. In the genesis of the project, the French company OCamlPro, which still develops numerous Tezos-related projects today (TZscan, Liquidity, etc.), played a...<p>Tezos is today an open-source project, an international network developed by teams spanning over five continents. In the genesis of the
project, the French company OCamlPro, which still develops numerous Tezos-related projects today (TZscan, Liquidity, etc.),
played a particularly important role. It is indeed within the company that research engineers laid the first stones of the code, in
close collaboration with Arthur Breitman, the architect of the project, and DLS, over several years. Today, we are delighted to see
how much the project has grown.</p>
<p>Arthur and OCamlPro
(joint publication)</p>
Improving Tezos Storage : update and beta-testinghttps://ocamlpro.com/blog/2019_01_30_improving_tezos_storage_update_and_beta_testing2019-01-30T08:12:13Z2019-01-30T08:12:13Z
Fabrice Le Fessant
In a previous post, we presented some work that we did to improve the quantity of storage used by the Tezos node. Our post generated a lot of comments, in which upcoming features such as garbage collection and pruning were introduced. It also motivated us to keep working on this (hot) topic, and we ...<p>In a <a href="http://ocamlpro.com/2019/01/15/improving-tezos-storage/">previous post</a>, we
presented some work that we did to improve the quantity of storage
used by the Tezos node. Our post generated a lot of comments, in which
upcoming features such as garbage collection and pruning were
introduced. It also motivated us to keep working on this (hot) topic,
and we present here our new results, and current state. Irontez3 is a
new version of our storage system, that we tested both on real traces
and real nodes. We implemented a garbage-collector for it, that is
triggered by an RPC on our node (we want the user to be able to choose
when it happens, especially for bakers who might risk losing a baking
slot), and automatically every 16 cycles in our traces.</p>
<p>In the following graph, we present the size of the context database
during a full trace execution (~278 000 blocks):</p>
<p><img src="/blog/assets/img/plot_sizes-2.png" alt="plot_size-2.png" /></p>
<p>There is definitely quite some improvement brought to the current
Tezos implementation based on Irmin+LMDB, that we reimplemented as
IronTez0. IronTez0 allows an IronTez node to read a database generated
by the current Tezos and switch to the IronTez3 database. At the
bottom of the graph, IronTez3 increases very slowly (about 7 GB at the
end), and the garbage-collector makes it even less expensive (about
2-3 GB at the end). Finally, we executed a trace where we switched
from IronTez0 to IronTez3 at block 225 000. The graph shows that,
after the switch, the size immediately grows much more slowly, and
finally, after a garbage collection, the storage is reduced to what it
would have been with IronTez3.</p>
<p>Now, let’s compare the speed of the different storages:</p>
<p><img src="/blog/assets/img/plot_times-2.png" alt="plot_times-2.png" /></p>
<p>The graph shows that IronTez3 is about 4-5 times faster than
Tezos/IronTez0. Garbage-collections have an obvious impact on the
speed, but clearly negligible compared to the current performance of
Tezos. On our computer used for the traces, a Xeon with an SSD disk,
the longest garbage collection takes between 1 and 2 minutes, even
when the database was about 40 GB at the beginning.</p>
<p>In the former post, we didn’t check the amount of memory used by our
storage system. It might be expected that the performance improvement
could be associated with a more costly use of memory… but such is not
the case :</p>
<p><img src="/blog/assets/img/plot_mem.png" alt="plot_mem.png" /></p>
<p>At the top of the graph is our IronTez0 implementation of the current
storage: it uses a little more memory than the current Tezos
implementation (about 6 GB), maybe because it shares data structures
with IronTez3, with fields that are only used by IronTez3 and could be
removed in a specialized version. IronTez3 and IronTez3 with garbage
collection are at the bottom, using about 2 GB of memory. It is
actually surprising that the cost of garbage collections is very
limited.</p>
<p>On our current running node, we get the following storage:</p>
<pre><code class="language-shell-session">$ du
1.4G ./context
4.9G ./store
6.3G .
</code></pre>
<p>Now, if we use our new RPC to revert the node to Irmin (taking a little less than 8 minutes on our computer), we get :</p>
<pre><code class="language-shell-session">$ du
14.3G ./context
4.9G ./store
19.2G .
</code></pre>
<h2>Beta-Testing with Docker</h2>
<p>If you are interested in these results, it is now possible to test our
node: we created a docker image, similar to the ones of Tezos. It is
available on Docker Hub (one image that works for both Mainnet and
Alphanet). Our script mainnet.sh (http://tzscan.io/irontez/mainnet.sh)
can be used similarly to the alphanet.sh script of Tezos to manage the
container. It can be run on an existing Tezos database; it will switch
it to IronTez3. Note that such a change is not irreversible, but it
might still be a good idea to back up your Tezos node directory before, as
(1) migrating back might take some time, (2) this is a beta-testing
phase, meaning the code might still hide nasty bugs, and (3) the
official node might introduce a new incompatible format.</p>
<h2>New RPCS</h2>
<p>Both of these RPCs will make the node TERMINATE once they have
completed. You should restart the node afterwards.</p>
<p>The RPC <code>/ocp/storage/gc</code> triggers a garbage collection. By
default, it will keep only the contexts from the last 9 cycles. It is
possible to change this value by using the <code>?keep</code> argument to
specify another number of contexts to keep (beware that if this value
is too low, you might end up with a non-working Tezos node, so we have
set a minimum value of 100). No garbage collection will happen if the
oldest context to keep was stored in the Irmin database.</p>
<p>The RPC <code>/ocp/storage/revert</code> triggers a migration of the
database from IronTez3 back to Irmin. If you have been using IronTez
for a while and want to go back to the official node, this is the way.
After calling this RPC, you should not run IronTez again; otherwise, it
will restart using the IronTez3 format, and you will need to revert
again. This operation can take a lot of time, depending on the quantity
of data to move between the two formats.</p>
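<p>For instance, a session could look like the following sketch (assuming a <code>tezos-client</code> built from the branch and able to reach the node’s RPC port; adjust the <code>keep</code> value to your needs):</p>
<pre><code class="language-shell-session">$ # trigger a garbage collection, keeping only the last 120 contexts
$ # (the node terminates once the GC is done and must be restarted)
$ ./tezos-client rpc get '/ocp/storage/gc?keep=120'

$ # later, migrate the database back to Irmin before returning to the
$ # official node (the node terminates afterwards as well)
$ ./tezos-client rpc get /ocp/storage/revert
</code></pre>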
<h2>Following Steps</h2>
<p>We are now working with the team at Nomadic Labs to include our work
in the public Tezos code base. We will inform you as soon as our Pull
Request is ready, for more testing ! If all testing and review goes
well, we hope it can be merged in the next release !</p>
<h1>Comments</h1>
<p>Jack (30 January 2019 at 15 h 30 min):</p>
<blockquote>
<p>Please release this as a MR on gitlab so those of us not using docker can start testing the code.</p>
</blockquote>
<p>Fabrice Le Fessant (10 February 2019 at 15 h 39 min):</p>
<blockquote>
<p>That was done: <a href="/2019/02/04/improving-tezos-storage-gitlab-branch-for-testers/">here</a></p>
</blockquote>
Tezos and OCamlProhttps://ocamlpro.com/blog/2019_01_29_tezos_and_ocamlpro2019-01-29T08:12:13Z2019-01-29T08:12:13Z
Arthur Breitman
A reflection on the new year… Today, Tezos is a global network and an open source project with developers spanning over five continents. In the inception of this project, the French company OCamlPro which, to this day, still develops numerous projects around Tezos, played a particularly important...<p>A reflection on the new year… Today, Tezos is a global network and an
open source project with developers spanning over five continents. In
the inception of this project, the French company OCamlPro which, to
this day, still develops numerous projects around Tezos, played a
particularly important role. Indeed, they were the first home of the
research engineers who laid down the cornerstone of the code base, in
tight collaboration with Arthur Breitman, the architect of the
project, and DLS. We take some time today to remember those early days
and celebrate the flourishing of this once small project.</p>
<p>(cross-post with Arthur Breitman, Founder of the Tezos project)</p>
opam 2.0.3 releasehttps://ocamlpro.com/blog/2019_01_28_opam_2.0.3_release2019-01-28T08:12:13Z2019-01-28T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the release of opam 2.0.3. This new version contains some backported fixes: Fix manpage remaining $ (OPAMBESTEFFORT)
Fix OPAMROOTISOK handling
Regenerate missing environment file Installation instructions (unchanged): From binaries: run or download manually from the Github...<p>We are pleased to announce the release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.3">opam 2.0.3</a>.</p>
<p>This new version contains some <a href="https://github.com/ocaml/opam/pull/3715">backported fixes</a>:</p>
<ul>
<li>Fix manpage remaining $ (OPAMBESTEFFORT)
</li>
<li>Fix OPAMROOTISOK handling
</li>
<li>Regenerate missing environment file
</li>
</ul>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.3">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.3#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
Improving Tezos Storagehttps://ocamlpro.com/blog/2019_01_15_improving_tezos_storage2019-01-15T08:12:13Z2019-01-15T08:12:13Z
Fabrice Le Fessant
Running a Tezos node currently costs a lot of disk space, about 59 GB for the context database, the place where the node stores the states corresponding to every block in the blockchain, since the first one. Of course, this is going to decrease once garbage collection is integrated, i.e. removing ve...<p>Running a Tezos node currently costs a lot of disk space, about 59 GB
for the context database, the place where the node stores the states
corresponding to every block in the blockchain, since the first
one. Of course, this is going to decrease once garbage collection is
integrated, i.e. removing very old information, that is not used and
cannot change anymore
(<a href="https://gitlab.com/tezos/tezos/merge_requests/720#note_125296853">PR720</a>
by Thomas Gazagnaire, Tarides, some early tests show a decrease to
14 GB, but with no performance evaluation). As a side note, this is
different from pruning, i.e. transmitting only the last cycles for
“light” nodes
(<a href="https://gitlab.com/tezos/tezos/merge_requests/663">PR663</a> by Thomas
Blanc, OCamlPro). Anyway, as Tezos will be used more and more,
contexts will keep growing, and we need to keep decreasing the space
and performance cost of Tezos storage.</p>
<p>As one part of our activity at OCamlPro is to allow companies to
deploy their own private Tezos networks, we decided to experiment with
new storage layouts. We implemented two branches: our branch
<code>IronTez1</code> is based on a full LMDB database, as Tezos currently, but
with optimized storage representation ; our branch <code>IronTez2</code> is based
on a mixed database, with both LMDB and file storage.</p>
<p>To test these branches, we started a node from scratch, and recorded
all the accesses to the context database, to be able to replay it with
our new experimental nodes. The node took about 12 hours to
synchronize with the network, of which about 3 hours were used to
write and read in the context database. We then replayed the trace,
either only the writes or with both reads and writes.</p>
<p>Here are the results:</p>
<p><img src="/blog/assets/img/plot_sizes.png" alt="plot_sizes.png" /></p>
<p>The mixed storage is the most interesting: it uses half the storage of a standard Tezos node!</p>
<p><img src="/blog/assets/img/plot_times-1.png" alt="plot_times-1.png" /></p>
<p>Again, the mixed storage is the most efficient: even with reads and
writes, <code>IronTez2</code> is five times faster than the current Tezos storage.</p>
<p>Finally, here is a graph that shows the impact of the two attacks that
happened in November 2018, and how it can be mitigated by storage
improvement:</p>
<p><img src="/blog/assets/img/plot_diffs.png" alt="plot_diffs.png" /></p>
<p>The graph shows that, using mixed storage, it is possible to restore the storage growth of Tezos to what it was before the attack !</p>
<p>Interestingly, although these experiments have been done on full traces, our branches are completely backward-compatible : they could be used on an already existing database, to store the new contexts in our optimized format, while keeping the old data in the ancient format.</p>
<p>Of course, there is still a lot of work to do, before this work is finished. We think that there are still more optimizations that are possible, and we need to test our branches on running nodes for some time to get confidence (TzScan might be the first tester !), but this is a very encouraging work for the future of Tezos !</p>
opam 2.0.2 releasehttps://ocamlpro.com/blog/2018_12_12_opam_2.0.2_release2018-12-12T08:12:13Z2018-12-12T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the release of opam 2.0.2. As sandbox scripts have been updated, don't forget to run opam init --reinit -ni to update yours. This new version contains mainly backported fixes: Doc:
update man page
add message for deprecated options
reinsert removed ones to print a deprecat...<p>We are pleased to announce the release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.2">opam 2.0.2</a>.</p>
<p>As <strong>sandbox scripts</strong> have been updated, don't forget to run <code>opam init --reinit -ni</code> to update yours.</p>
<p>This new version contains mainly <a href="https://github.com/ocaml/opam/pull/3669">backported fixes</a>:</p>
<ul>
<li>Doc:
<ul>
<li>update man page
</li>
<li>add message for deprecated options
</li>
<li>reinsert removed ones to print a deprecated message instead of fail (e.g. <code>--alias-of</code>)
</li>
<li>deprecate <code>no-aspcud</code>
</li>
</ul>
</li>
<li>Pin:
<ul>
<li>on pinning, rebuild updated <code>pin-depends</code> packages reliably
</li>
<li>include descr & url files on pinning 1.2 opam files
</li>
</ul>
</li>
<li>Sandbox:
<ul>
<li>handle symlinks in bubblewrap for system directories such as <code>/bin</code> or <code>/lib</code> (<a href="https://github.com/ocaml/opam/pull/3661">#3661</a>). Fixes sandboxing on some distributions such as CentOS 7 and Arch Linux.
</li>
<li>allow use of unix domain sockets on macOS (<a href="https://github.com/ocaml/opam/issues/3659">#3659</a>)
</li>
<li>change one-line conditional to if statement which was incompatible with set -e
</li>
<li>make /var readonly instead of empty and rw
</li>
</ul>
</li>
<li>Path: resolve default opam root path
</li>
<li>System: suffix .out for read_command_output stdout files
</li>
<li>Locked: check consistency with opam file when reading lock file to suggest regeneration message
</li>
<li>Show: remove pin depends messages
</li>
<li>Cudf: Fix closure computation in the presence of cycles to have a complete graph if a cycle is present in the graph (typically <code>ocaml-base-compiler</code> ⇄ <code>ocaml</code>)
</li>
<li>List: Fix some cases of listing coinstallable packages
</li>
<li>Format upgrade: extract archived source files of version-pinned packages
</li>
<li>Core: add is_archive in OpamSystem and OpamFilename
</li>
<li>Init: don't fail if empty compiler given
</li>
<li>Lint: fix light_uninstall flag for error 52
</li>
<li>Build: partial port to dune
</li>
<li>Update cold compiler to 4.07.1
</li>
</ul>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.2">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed or to update your sandbox script)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.2#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new minor version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
An Introduction to Tezos RPCs: Signing Operationshttps://ocamlpro.com/blog/2018_11_21_an_introduction_to_tezos_rpcs_signing_operations2018-11-21T08:12:13Z2018-11-21T08:12:13Z
Fabrice Le Fessant
In a previous blogpost, we presented the RPCs used by tezos-client to send a transfer operation to a tezos-node. We were left with two remaining questions: How to forge a binary operation, for signature
How to sign a binary operation In this post, we will reply to these questions. We are still assum...<p>In a <a href="http://ocamlpro.com/2018/11/15/an-introduction-to-tezos-rpcs-a-basic-wallet/">previous blogpost</a>,
we presented the RPCs used by tezos-client to send a transfer
operation to a tezos-node. We were left with two remaining questions:</p>
<ul>
<li>
<p>How to forge a binary operation, for signature</p>
</li>
<li>
<p>How to sign a binary operation</p>
</li>
</ul>
<p>In this post, we will reply to these questions. We are still assuming
a node running and waiting for RPCs on address 127.0.0.1:9731. Since
we will ask this node to forge a request, we really need to trust it,
as a malicious node could send a different binary transaction from the
one we sent it.</p>
<p>Let’s take back our first operation:</p>
<pre><code class="language-json">{
"branch": "BMHBtAaUv59LipV1czwZ5iQkxEktPJDE7A9sYXPkPeRzbBasNY8",
"contents": [
{ "kind": "transaction",
"source": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"fee": "50000",
"counter": "3",
"gas_limit": "200",
"storage_limit": "0",
"amount": "100000000",
"destination": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN"
} ]
}
</code></pre>
<p>So, we need to translate this operation into a binary format, more
amenable for signature. For that, we use a new RPC to forge
operations. Under Linux, we can use the tool <code>curl</code> to send the
request to the node:</p>
<pre><code class="language-shell-session">$ curl -v -X POST http://127.0.0.1:9731/chains/main/blocks/head/helpers/forge/operations -H "Content-type: application/json" --data '{
"branch": "BMHBtAaUv59LipV1czwZ5iQkxEktPJDE7A9sYXPkPeRzbBasNY8",
"contents": [
{ "kind": "transaction",
"source": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"fee": "50000",
"counter": "3",
"gas_limit": "200",
"storage_limit": "0",
"amount": "100000000",
"destination": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN"
} ]
}'
</code></pre>
<p>Note that we use a POST request (request with content), with a
<code>Content-type</code> header indicating that the content is in JSON format. We
get the following body in the reply :</p>
<pre><code class="language-json">"ce69c5713dac3537254e7be59759cf59c15abd530d10501ccf9028a5786314cf08000002298c03ed7d454a101eb7022bc95f7e5f41ac78d0860303c8010080c2d72f0000e7670f32038107a59a2b9cfefae36ea21f5aa63c00"
</code></pre>
<p>This is the binary representation of our operation, in hexadecimal
format, exactly what we were looking for to be able to include
operations on the blockchain. However, this representation is not yet
complete, since we also need the operation to be signed by the
manager.</p>
<p>To sign this operation, we will first use <code>tezos-client</code>. That’s
something that we can do if we want, for example, to sign an operation
offline, for better security. Let’s assume that we have saved the
content of the string (<code>ce69...3c00</code> without the quotes) in a file
<code>operation.hex</code>, we can ask <code>tezos-client</code> to sign it with:</p>
<pre><code class="language-shell-session">$ tezos-client --addr 127.0.0.1 --port 9731 sign bytes 0x03$(cat operation.hex) for bootstrap1
</code></pre>
<p>The <code>0x03$(cat operation.hex)</code> is the concatenation of the <code>0x03</code>
prefix and the hexadecimal content of the <code>operation.hex</code> file, which is equivalent
to <code>0x03ce69...3c00</code>. The prefix is used (1) to indicate that the
representation is hexadecimal (<code>0x</code>), and (2) that it should start with
<code>03</code>, which is a watermark for operations in Tezos.</p>
<p>We get the following reply in the console:</p>
<pre><code class="language-shell-session">Signature: edsigtkpiSSschcaCt9pUVrpNPf7TTcgvgDEDD6NCEHMy8NNQJCGnMfLZzYoQj74yLjo9wx6MPVV29CvVzgi7qEcEUok3k7AuMg
</code></pre>
<p>Wonderful, we have a signature, in <code>base58check</code> format! We can use
this signature in the <code>run_operation</code> and <code>preapply</code> RPCs… but not in
the <code>injection</code> RPC, which requires a binary format. So, to inject the
operation, we need to convert the signature to its hexadecimal
representation. For that, we will use the <code>base58check</code> package of Python
(we could do it in OCaml, but then, we could just use <code>tezos-client</code>
all along, no?):</p>
<pre><code class="language-shell-session">$ pip3 install base58check
$ python
>>>import base58check
>>>base58check.b58decode(b'edsigtkpiSSschcaCt9pUVrpNPf7TTcgvgDEDD6NCEHMy8NNQJCGnMfLZzYoQj74yLjo9wx6MPVV29CvVzgi7qEcEUok3k7AuMg').hex()
'09f5cd8612637e08251cae646a42e6eb8bea86ece5256cf777c52bc474b73ec476ee1d70e84c6ba21276d41bc212e4d878615f4a31323d39959e07539bc066b84174a8ff0de436e3a7'
</code></pre>
<p>All signatures in Tezos start with <code>09f5cd8612</code>, which is used to
generate the <code>edsig</code> prefix. Also, the last 4 bytes are used as a
checksum (<code>e436e3a7</code>). Thus, the signature itself is after this prefix
and before the checksum: <code>637e08251cae64...174a8ff0d</code>.</p>
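<p>As a small sketch, the same extraction can be scripted in Python with the <code>base58check</code> package used above:</p>
<pre><code class="language-python">import base58check

edsig = b'edsigtkpiSSschcaCt9pUVrpNPf7TTcgvgDEDD6NCEHMy8NNQJCGnMfLZzYoQj74yLjo9wx6MPVV29CvVzgi7qEcEUok3k7AuMg'
decoded = base58check.b58decode(edsig)
raw_sig = decoded[5:-4]  # drop the 5-byte prefix and the 4-byte checksum
print(raw_sig.hex())     # hexadecimal signature, ready to be appended to the operation
</code></pre>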
<p>Finally, we just need to append the binary operation with the binary
signature for the injection, and put them into a string, and send that
to the server for injection. If we have stored the hexadecimal
representation of the signature in a file <code>signature.hex</code>, then we can
use :</p>
<pre><code class="language-shell-session">$ curl -v -H "Content-type: application/json" 'http://127.0.0.1:9731/injection/operation?chain=main' --data '"'$(cat operation.hex)$(cat signature.hex)'"'
</code></pre>
<p>and we receive the hash of this new operation:</p>
<pre><code class="language-json">"oo1iWZDczV8vw3XLunBPW6A4cjmdekYTVpRxRh77Fd1BVv4HV2R"
</code></pre>
<p>Again, we cheated a little, by using <code>tezos-client</code> to generate the
signature. Let’s try to do it in Python, too !</p>
<p>First, we will need the secret key of bootstrap1. We can export from
<code>tezos-client</code> to use it directly:</p>
<pre><code class="language-shell-session">$ tezos-client show address bootstrap1 -S
Hash: tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx
Public Key: edpkuBknW28nW72KG6RoHtYW7p12T6GKc7nAbwYX5m8Wd9sDVC9yav
Secret Key: unencrypted:edsk3gUfUPyBSfrS9CCgmCiQsTCHGkviBDusMxDJstFtojtc1zcpsh
</code></pre>
<p>The secret key is exported on the last line by using the <code>-S</code>
argument, and it usually starts with <code>edsk</code>. Again, it is in
<code>base58check</code>, so we can use the same trick to extract its binary
value:</p>
<pre><code class="language-shell-session">$ python3
>>> import base58check
>>> base58check.b58decode(b'edsk3gUfUPyBSfrS9CCgmCiQsTCHGkviBDusMxDJstFtojtc1zcpsh').hex()[8:72]
'8500c86780141917fcd8ac6a54a43a9eeda1aba9d263ce5dec5a1d0e5df1e598'
</code></pre>
<p>This time, we directly extracted the key, by removing the first 8 hexadecimal
characters, and keeping only 64 hexadecimal characters (using <code>[8:72]</code>), since the key
is 32 bytes long. Let’s suppose that we save this value in a file
<code>bootstrap1.hex</code>.</p>
<p>Now, we will use the following script to compute the signature:</p>
<pre><code class="language-python">import binascii
operation=binascii.unhexlify(open("operation.hex","rb").readline()[:-1])
seed = binascii.unhexlify(open("bootstrap1.hex","rb").readline()[:-1])
from pyblake2 import blake2b
h = blake2b(digest_size=32)
h.update(b'\x03' + operation)
digest = h.digest()
import ed25519
sk = ed25519.SigningKey(seed)
sig = sk.sign(digest)
print(sig.hex())
</code></pre>
<p>The <code>binascii</code> module is used to read the files in hexadecimal (after
removing the newlines), to get the binary representation of the
operation and of the Ed25519 seed. Ed25519 is an elliptic curve used
in Tezos to manage <code>tz1</code> addresses, i.e. to sign data and check
signatures.</p>
<p>The <code>blake2b</code> module is used to hash the message, before
signature. Again, we add a watermark to the operation, i.e. <code>\x03</code>,
before hashing. We also have to specify the size of the hash,
i.e. <code>digest_size=32</code>, because the Blake2b hashing function can generate
hashes with different sizes.</p>
<p>Finally, we use the ed25519 module to transform the seed
(private/secret key) into a signing key, and use it to sign the hash,
that we print in hexadecimal. We obtain:</p>
<pre><code class="language-json">637e08251cae646a42e6eb8bea86ece5256cf777c52bc474b73ec476ee1d70e84c6ba21276d41bc212e4d878615f4a31323d39959e07539bc066b84174a8ff0d
</code></pre>
<p>This result is exactly the same as what we got using tezos-client !</p>
<p><img src="/blog/assets/img/SignTransaction-791x1024.jpg" alt="SignTransaction-791x1024.jpg" /></p>
<p>We now have a complete wallet, i.e. the ability to create transactions
and sign them without tezos-client. Of course, there are several
limitations to this work: first, we have exposed the private key in
clear, which is usually not a very good idea for security; also, Tezos
supports three types of keys, <code>tz1</code> for Ed25519 keys, <code>tz2</code> for
Secp256k1 keys (same as Bitcoin/Ethereum) and <code>tz3</code> for P256 keys;
finally, a realistic wallet would probably use cryptographic chips, on
a mobile phone or an external device (Ledger, etc.).</p>
<h1>Comments</h1>
<p>Anthony (28 November 2018 at 2 h 01 min):</p>
<blockquote>
<p>Fabrice, you talk about signing the operation using tezos-client, which can then be used with the run_operation, however when you talk about doing it in a script, it doesn’t include the edsig or checksum or converted back into a usable form for run_operations. Can you explain how this is done in a script?</p>
<p>Thanks
Anthony</p>
</blockquote>
<p>Fabrice Le Fessant (29 November 2018 at 15 h 07 min):</p>
<blockquote>
<p>You are right, <code>run_operation</code> needs an <code>edsig</code> signature, not the hexadecimal encoding. To generate the <code>edsig</code>, you just need to use the reverse operation of <code>base58check.b58decode</code>, i.e. <code>base58check.b58encode</code>, on the concatenation of 3 byte arrays:</p>
<p>1/ the 5-bytes prefix that will generate the initial <code>edsig</code> characters, i.e. <code>0x09f5cd8612</code> in hexadecimal
2/ the raw signature <code>s</code>
3/ the 4 initial bytes of a checksum: the checksum is computed as <code>sha256(sha256(s))</code></p>
</blockquote>
<p>Badalona (27 December 2018 at 13 h 13 min):</p>
<blockquote>
<p>Hi Fabrice.</p>
<p>I will appreciate it if you show the coding of step 3. My checksum is always wrong.</p>
<p>Thanks</p>
</blockquote>
<p>Alain (16 January 2019 at 16 h 26 min):</p>
<blockquote>
<p>The checksum is on prefix + s.
Here is a python3 script to do it:</p>
<p>./hex2edsig.py 637e08251cae646a42e6eb8bea86ece5256cf777c52bc474b73ec476ee1d70e84c6ba21276d41bc212e4d878615f4a31323d39959e07539bc066b84174a8ff0d
edsigtkpiSSschcaCt9pUVrpNPf7TTcgvgDEDD6NCEHMy8NNQJCGnMfLZzYoQj74yLjo9wx6MPVV29CvVzgi7qEcEUok3k7AuMg</p>
</blockquote>
<pre><code class="language-python">from pyblake2 import blake2b
import hashlib
import base58check
import ed25519
import sys
def sha256 (x) :
    return hashlib.sha256(x).digest()

def b58check (prefix, b) :
    x = prefix + b
    checksum = sha256(sha256(x))[0:4]
    return base58check.b58encode(x + checksum)
edsig_prefix = bytes([9, 245, 205, 134, 18])
hexsig = sys.argv[1]
bytessig = bytes.fromhex(hexsig)
b58sig = b58check (edsig_prefix, bytessig)
print(b58sig.decode('ascii'))
</code></pre>
<p>Anthony (29 November 2018 at 21 h 49 min):</p>
<blockquote>
<p>Fabrice,
Thanks for the information. Would you be able to show the coding as you have done in your blog?
Thanks
Anthony</p>
</blockquote>
<p>Mark Robson (9 February 2020 at 23 h 51 min):</p>
<blockquote>
<p>Great information, but can the article be updated to include the things discussed in the comments? As I can’t see the private key of bootstrap1 I can’t replicate locally. Been going around in circles on that point</p>
</blockquote>
<p>stacey roberts (7 May 2020 at 13 h 53 min):</p>
<blockquote>
<p>Can you help me to clear about how tezos can support to build a fully decentralized supply chain eco system?</p>
</blockquote>
<p>leesadaisy (16 September 2020 at 10 h 25 min):</p>
<blockquote>
<p>Hi there! Thanks for sharing useful info. Keep up your work.</p>
</blockquote>
<p>Alice Jenifferze (17 September 2020 at 10 h 51 min):</p>
<blockquote>
<p>Thanks for sharing!</p>
</blockquote>
Introduction aux RPCs dans Tezos : exemple d’un portefeuille (wallet) simplehttps://ocamlpro.com/blog/2018_11_20_fr_introduction_aux_rpcs_dans_tezos_exemple_dun_portefeuille_wallet_simple2018-11-20T08:12:13Z2018-11-20T08:12:13Z
Fabrice Le Fessant
In this technical article, we briefly introduce Tezos RPCs through a simple example showing how the Tezos client interacts with the node during a transfer instruction. Tezos RPCs are HTTP requests (GET or POST) to which Tezos nodes reply in...<p>In this technical article, we briefly introduce Tezos RPCs through a simple example showing how the Tezos client
interacts with the node during a transfer instruction. Tezos RPCs are HTTP requests (GET or POST) to which Tezos nodes
reply in JSON format. They are the only way for wallets to interact with <a href="/2018/11/15/an-introduction-to-tezos-rpcs-a-basic-wallet/">Read more…</a></p>
An Introduction to Tezos RPCs: a Basic Wallethttps://ocamlpro.com/blog/2018_11_15_an-introduction_to_tezos_rpcs_a_basic_wallet2018-11-15T08:12:13Z2018-11-15T08:12:13Z
Fabrice Le Fessant
In this technical blog post, we will briefly introduce Tezos RPCs through a simple example: we will show how the tezos-client program interacts with the tezos-node during a transfer command. Tezos RPCs are HTTP queries (GET or POST) to which tezos-node replies in JSON format. They are the only way f...<p>In this technical blog post, we will briefly introduce Tezos RPCs
through a simple example: we will show how the <code>tezos-client</code> program
interacts with the <code>tezos-node</code> during a <code>transfer</code> command. Tezos RPCs
are HTTP queries (<code>GET</code> or <code>POST</code>) to which <code>tezos-node</code> replies in JSON
format. They are the only way for wallets to interact with the
node. However, given the large number of RPCs accepted by the node, it
is not always easy to understand which ones can be useful if you want
to write a wallet. So, here, we use <code>tezos-client</code> as a simple example,
that we will complete in another blog post for wallets that do not
have access to the Tezos Protocol OCaml code.</p>
<p>As for the basic setup, we run a sandboxed node locally on port 9731,
with two known addresses in its wallet, called bootstrap1 and
bootstrap2.</p>
<p>Here is the command we are going to trace during this example:</p>
<pre><code class="language-shell-session">tezos-client --addr 127.0.0.1 --port 9731 -l transfer 100 from bootstrap1 to bootstrap2
</code></pre>
<p>With this command, we send just 100 tezzies between the two accounts,
paying only for the default fees (0.05 tz).</p>
<p>We use the <code>-l</code> option to request <code>tezos-client</code> to log all the RPC
calls it uses on the standard error (the console).</p>
<p>The first query issued by <code>tezos-client</code> is:</p>
<pre><code>>>>>0: http://127.0.0.1:9731/chains/main/blocks/head/context/contracts/tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx/counter
<<<<0: 200 OK
"2"
</code></pre>
<p><code>tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx</code> is the Tezos address
corresponding to bootstrap1, the payer of the operation. In Tezos, the
payer is the address responsible for paying the fees and burn
(storage) of the transaction. In our case, it is also the source of
the transfer. Here, <code>tezos-client</code> requests the counter of the payer,
because all operations must have a different counter. This is an
important feature: here, it will prevent bootstrap2 from sending the
same operation over and over, emptying the account of bootstrap1.</p>
<p>Here, the counter is 2, probably because we already issued some former operations, so the next operation should have a counter of 3. The request is done on the block head of the main chain, an alias for the last block baked on the chain.</p>
<p>The next query is:</p>
<pre><code class="language-json">>>>>1: http://127.0.0.1:9731/chains/main/blocks/head/context/contracts/tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx/manager_key
<<<<1: 200 OK
{ "manager": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"key": "edpkuBknW28nW72KG6RoHtYW7p12T6GKc7nAbwYX5m8Wd9sDVC9yav" }
</code></pre>
<p>This time, the client requests the key of the account manager. For a keyhash address (tz…), the manager is always itself, but this query is needed to know if the public key of the manager has been revealed. Here, the key field contains a public key, which means a revelation operation has already been published. Otherwise, the client would have had to also create this revelation operation prior to the transfer (or together, actually). The revelation is mandatory, because all the nodes need to know the public key of the manager to validate the signature of the transfer.</p>
<p>Let’s see the next query:</p>
<pre><code class="language-json">>>>>2: http://127.0.0.1:9731/monitor/bootstrapped
<<<<2: 200 OK
{ "block": "BLyypN89WuTQyLtExGP6PEuZiu5WFDxys3GTUf7Vz4KvgKcvo2E",
"timestamp": "2018-10-13T00:32:47Z" }
</code></pre>
<p>This time, the client checks whether the node it is using is well connected to the network. A node is bootstrapped if it has enough connections to other nodes, and its chain is synchronized with them. This step is needed to prevent the operation from being sent on an obsolete fork of the chain.</p>
<p>Now, the next query requests the current configuration of the network.</p>
<pre><code class="language-json">>>>>3: http://127.0.0.1:9731/chains/main/blocks/head/context/constants
<<<<3: 200 OK
{ "proof_of_work_nonce_size": 8,
"nonce_length": 32,
"max_revelations_per_block": 32,
"max_operation_data_length": 16384,
"preserved_cycles": 5,
"blocks_per_cycle": 4096,
"blocks_per_commitment": 32,
"blocks_per_roll_snapshot": 512,
"blocks_per_voting_period": 32768,
"time_between_blocks": [ "60", "75" ],
"endorsers_per_block": 32,
"hard_gas_limit_per_operation": "400000",
"hard_gas_limit_per_block": "4000000",
"proof_of_work_threshold": "-1",
"tokens_per_roll": "10000000000",
"michelson_maximum_type_size": 1000,
"seed_nonce_revelation_tip": "125000",
"origination_burn": "257000",
"block_security_deposit": "512000000",
"endorsement_security_deposit": "64000000",
"block_reward": "16000000",
"endorsement_reward": "2000000",
"cost_per_byte": "1000",
"hard_storage_limit_per_operation": "60000"
}
</code></pre>
<p>These constants may differ for different protocols, or different
networks. They are for example different on mainnet, alphanet and
zeronet. Among these constants, some of them are useful when issuing a
transaction: mainly <code>hard_gas_limit_per_operation</code> and
<code>hard_storage_limit_per_operation</code>. The first one is the maximum gas
that can be set for a transaction, and the second one is the maximum
storage that can be used. We won't use them directly as the final limits, but
the client uses them in a first simulation to estimate the gas and storage that
will actually be set for the transaction.</p>
<pre><code class="language-json">>>>>4: http://127.0.0.1:9731/chains/main/blocks/head/hash
<<<<4: 200 OK
"BLyypN89WuTQyLtExGP6PEuZiu5WFDxys3GTUf7Vz4KvgKcvo2E"
</code></pre>
<p>This query is a bit redundant with the <code>/monitor/bootstrapped</code> query,
which already returned the last block baked on the chain. Anyway, it
is useful if we are not working on the main chain.</p>
<p>The next query requests the chain_id of the main chain, which is
typically useful to verify that we know the format of operations for
this chain id:</p>
<pre><code class="language-json">>>>>5: http://127.0.0.1:9731/chains/main/chain_id
<<<<5: 200 OK
"NetXdQprcVkpaWU"
</code></pre>
<p>Finally, the client tries to simulate the transaction, using the
maximal gas and storage limits requested earlier. Since it is in
simulation mode, the transaction is only run locally on the node, and
immediately backtracked. This simulation tells us whether the transaction
executes successfully, and how much gas and storage it actually uses
(to avoid paying fees for an erroneous transaction):</p>
<pre><code class="language-json">>>>>6: http://127.0.0.1:9731/chains/main/blocks/head/helpers/scripts/run_operation
{ "branch": "BLyypN89WuTQyLtExGP6PEuZiu5WFDxys3GTUf7Vz4KvgKcvo2E",
"contents": [
{ "kind": "transaction",
"source": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"fee": "50000",
"counter": "3",
"gas_limit": "400000",
"storage_limit": "60000",
"amount": "100000000",
"destination": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN" }
],
"signature":
"edsigtXomBKi5CTRf5cjATJWSyaRvhfYNHqSUGrn4SdbYRcGwQrUGjzEfQDTuqHhuA8b2d8NarZjz8TRf65WkpQmo423BtomS8Q"
}
</code></pre>
<p>The operation is related to a branch, and you can see that the branch
field is here set to the hash of the last block head. The branch field
is used to prevent an operation from being executed on an alternative
head, and also for garbage collection: an operation can be inserted
only in one of the 64 blocks after the branch block, or it will be
deleted.</p>
<p>The result looks like this:</p>
<pre><code class="language-json"><<<<6: 200 OK
{ "contents": [
{ "kind": "transaction",
"source": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"fee": "50000",
"counter": "3",
"gas_limit": "400000",
"storage_limit": "60000",
"amount": "100000000",
"destination": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN",
"metadata": {
"balance_updates": [
{ "kind": "contract",
"contract": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"change": "-50000" },
{ "kind": "freezer",
"category": "fees",
"delegate": "tz1Ke2h7sDdakHJQh8WX4Z372du1KChsksyU",
"level": 0,
"change": "50000" }
],
"operation_result":
{ "status": "applied",
"balance_updates": [
{ "kind": "contract",
"contract": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"change": "-100000000" },
{ "kind": "contract",
"contract": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN",
"change": "100000000" }
],
"consumed_gas": "100" } } }
]
}
</code></pre>
<p>Notice the <code>consumed_gas</code> field in the metadata section: that’s the gas
we can expect the transaction to use on the real chain. Here,
no storage is consumed; otherwise, a <code>storage_size</code> field would be
present. The returned status is <code>applied</code>, meaning that the transaction
was successfully simulated by the node.</p>
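<p>For reference, such a POST query can also be issued directly. Here is a
hedged sketch, again assuming <code>cohttp-lwt-unix</code> (the <code>run_operation</code>
helper name is ours, and <code>json_payload</code> stands for the JSON document shown
in query 6):</p>
<pre><code class="language-ocaml">(* Sketch: POST a JSON payload to the run_operation RPC of a local node. *)
open Lwt.Infix

let run_operation ~host ~port ~json_payload =
  let url =
    Printf.sprintf
      "http://%s:%d/chains/main/blocks/head/helpers/scripts/run_operation"
      host port
  in
  let headers = Cohttp.Header.init_with "Content-Type" "application/json" in
  Cohttp_lwt_unix.Client.post ~headers
    ~body:(Cohttp_lwt.Body.of_string json_payload)
    (Uri.of_string url)
  >>= fun (_resp, body) -> Cohttp_lwt.Body.to_string body
</code></pre>
<p>Running it with the payload above should return the same kind of answer as
the one shown, provided the node accepts the signature.</p>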
<p>However, the query contained a field that we cannot easily infer:
the signature field. Indeed, the <code>tezos-client</code> knows how to
generate a signature for the transaction, since it knows the public/private
key pair of the manager. How can we do that in our own wallet? We will explain
that in a future Tezos blog post.</p>
<p>Again, the <code>tezos-client</code> requests the last block head:</p>
<pre><code class="language-json">>>>>7: http://127.0.0.1:9731/chains/main/blocks/head/hash
<<<<7: 200 OK
"BLyypN89WuTQyLtExGP6PEuZiu5WFDxys3GTUf7Vz4KvgKcvo2E"
</code></pre>
<p>and the current chain id:</p>
<pre><code class="language-json">>>>>8: http://127.0.0.1:9731/chains/main/chain_id
<<<<8: 200 OK
"NetXdQprcVkpaWU"
</code></pre>
<p>The last simulation is a prevalidation of the transaction, with the
exact same parameters (gas and storage) with which it will be
submitted on the official blockchain:</p>
<pre><code class="language-json">>>>>9: http://127.0.0.1:9731/chains/main/blocks/head/helpers/preapply/operations
[ { "protocol": "PsYLVpVvgbLhAhoqAkMFUo6gudkJ9weNXhUYCiLDzcUpFpkk8Wt",
"branch": "BLyypN89WuTQyLtExGP6PEuZiu5WFDxys3GTUf7Vz4KvgKcvo2E",
"contents": [
{ "kind": "transaction",
"source": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"fee": "50000",
"counter": "3",
"gas_limit": "200",
"storage_limit": "0",
"amount": "100000000",
"destination": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN"
} ],
"signature": "edsigu5Cb8WEmUZzoeGSL3sbSuswNFZoqRPq5nXA18Pg4RHbhnFqshL2Rw5QJBM94UxdWntQjmY7W5MqBDMhugLgqrRAWHyH5hD"
} ]
</code></pre>
<p>Notice that, in this query, the <code>gas_limit</code> was set to
200. <code>tezos-client</code> is a bit conservative, adding 100 to the gas
returned by the first simulation. Indeed, the gas consumption can differ
when the transaction is run for inclusion, for example if a baker
inserted, before it, another transaction that interferes with this one
(for instance, a transaction that empties an account has an additional
gas cost of 50).</p>
<pre><code class="language-json"><<<<9: 200 OK
[ { "contents": [
{ "kind": "transaction",
"source": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"fee": "50000",
"counter": "3",
"gas_limit": "200",
"storage_limit": "0",
"amount": "100000000",
"destination": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN",
"metadata": {
"balance_updates": [
{ "kind": "contract",
"contract": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"change": "-50000" },
{ "kind": "freezer",
"category": "fees",
"delegate": "tz1Ke2h7sDdakHJQh8WX4Z372du1KChsksyU",
"level": 0,
"change": "50000" } ],
"operation_result":
{ "status": "applied",
"balance_updates": [
{ "kind": "contract",
"contract": "tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx",
"change": "-100000000" },
{ "kind": "contract",
"contract": "tz1gjaF81ZRRvdzjobyfVNsAeSC6PScjfQwN",
"change": "100000000" } ],
"consumed_gas": "100" }
} } ],
"signature": "edsigu5Cb8WEmUZzoeGSL3sbSuswNFZoqRPq5nXA18Pg4RHbhnFqshL2Rw5QJBM94UxdWntQjmY7W5MqBDMhugLgqrRAWHyH5hD"
} ]
</code></pre>
<p>Again, the <code>tezos-client</code> had to sign the transaction with the manager's
private key. This will be explained in a future blog post.</p>
<p>Since this prevalidation was successful, the client can now inject the
transaction on the blockchain:</p>
<pre><code class="language-json">>>>>10: http://127.0.0.1:9731/injection/operation?chain=main
"a75719f568f22f279b42fa3ce595c5d4d0227cc8cf2af351a21e50d2ab71ab3208000002298c03ed7d454a101eb7022bc95f7e5f41ac78d0860303c8010080c2d72f0000e7670f32038107a59a2b9cfefae36ea21f5aa63c00eff5b0ce828237f10bab4042a891d89e951de2c5ad4a8fa72e9514ee63fec9694a772b563bcac8ae0d332d57f24eae7d4a6fad784a8436b6ba03d05bf72e4408"
<<<<10: 200 OK
"ooUo7nUZAbZKhTuX5NC999BuHs9TZBmtoTrCWT3jFnW7vMdN25U"
</code></pre>
<p>We can see that this request does not contain the JSON encoding of the
transaction, but a binary version (in hexadecimal format). This binary
version is what is stored in the blockchain, to decrease the size of
the storage. It contains both a binary encoding of the transaction,
and the signature of the transaction. <code>tezos-client</code> knows this binary
format, but if we want to create our own wallet, we will need a way to
compute it by ourselves.</p>
<p>The node replies with the operation hash of the injected operation:
the operation is now waiting for inclusion in the mempool of the node,
and will be forwarded to other nodes so that the next baker can
include it in the next block.</p>
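<p>For completeness, the injection RPC simply POSTs the hexadecimal string,
quoted as a JSON string, to <code>/injection/operation</code>. A possible sketch,
assuming the signed operation bytes are already available in hexadecimal form
(the <code>inject_operation</code> helper is again ours):</p>
<pre><code class="language-ocaml">(* Sketch: inject a signed, hex-encoded operation on the main chain. *)
open Lwt.Infix

let inject_operation ~host ~port ~signed_hex =
  let url =
    Printf.sprintf "http://%s:%d/injection/operation?chain=main" host port in
  (* The RPC expects a JSON string, hence the surrounding double quotes. *)
  let body = Cohttp_lwt.Body.of_string (Printf.sprintf "%S" signed_hex) in
  Cohttp_lwt_unix.Client.post ~body (Uri.of_string url)
  >>= fun (_resp, body) ->
  (* The answer is the operation hash, e.g. "ooUo7nUZ..." *)
  Cohttp_lwt.Body.to_string body
</code></pre>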
<p>I hope you now have a better understanding of how a wallet can use
Tezos RPCs to issue a transaction. Two questions remain,
for a future blog post:</p>
<ul>
<li>How to generate the binary format of an operation from its JSON encoding?
</li>
<li>How to sign an operation, so that we can include this signature in the run, preapply and injection RPCs?
</li>
</ul>
<p>If we can answer these questions, we will also be able to sign
operations offline.</p>
<h1>Comments</h1>
<p>lizhihohng (5 May 2019 at 6 h 59 min):</p>
<blockquote>
<p>Before forge or sign a transaction, how to get a gas or gas limit, not a hard gas limit from contants?</p>
</blockquote>
<p>Juliane (16 November 2019 at 15 h 29 min):</p>
<blockquote>
<p>Good answer back in return of this difficulty with solid arguments and explaining all on the topic of that.</p>
</blockquote>
First Open-Source Release of TzScanhttps://ocamlpro.com/blog/2018_11_08_first_open_source_release_of_tzscan2018-11-08T08:12:13Z2018-11-08T08:12:13Z
Fabrice Le Fessant
In October 2017, after the Tezos ICO, OCamlPro started to work on a block explorer for Tezos. For us, it was the most important software that we could contribute to the community, after the node itself, of course. We used it internally to monitor the Tezos alphanet, until its official public release...<p>In October 2017, after the Tezos ICO, OCamlPro started to work on a
block explorer for Tezos. For us, it was the most important software
that we could contribute to the community, after the node itself, of
course. We used it internally to monitor the Tezos alphanet, until its
official public release in February 2018, as
<a href="https://tzscan.io">TzScan</a>. One of TzScan main goals was to make the
complex DPOS consensus algorithm of Tezos easier to understand, to
follow, especially for bakers who will contribute to it. Since its
creation, we have been improving it every day, rushing for the Betanet
in June 2018, and still now, monitoring all the Tezos networks,
Mainnet, Alphanet and Zeronet.</p>
<p>So we are pleased today to announce the first release of TzScan OS, the open-source version of TzScan!</p>
<ul>
<li>
<p>The sources are available on Gitlab:
<a href="https://gitlab.com/tzscan/tzscan">https://gitlab.com/tzscan/tzscan</a></p>
</li>
<li>
<p>The code, mostly OCaml, is distributed under <a href="https://www.gnu.org/licenses/gpl-3.0.en.html">GNU GPL
v3</a>.</p>
</li>
</ul>
<p>The project contains:</p>
<p><img src="/blog/assets/img/TzScanOS.png" alt="TZScan architecture schema" /></p>
<ul>
<li>
<p>The blockchain crawler, used to monitor the blockchain, and fill a PostgreSQL database</p>
</li>
<li>
<p>The web interface, requesting information using a REST API</p>
</li>
<li>
<p>The API server, using the PostgreSQL database to reply to API requests</p>
</li>
</ul>
<p>It can be used in two different modes:</p>
<ul>
<li>
<p>Remote Use: if you are not running a Tezos node, you might want to
only run the web interface, using the official TzScan API server</p>
</li>
<li>
<p>Local Use: if you are running a Tezos node, you can use the crawler
and the API server to serve information on your node, to a locally
running web interface</p>
</li>
</ul>
<h2>Contribute</h2>
<p>If you are interested in contributing to TzScan OS, a first step could
be to translate TzScan into your language: check the file
<a href="https://gitlab.com/tzscan/tzscan/blob/master/static/lang-en.json">lang-en.json</a>
for a list of strings to translate, and
<a href="https://gitlab.com/tzscan/tzscan/blob/master/static/lang-fr.json">lang-fr.json</a>
for a partial translation!</p>
<h2>OCamlPro’s services around TzScan</h2>
<p>TzScan OS can be used to monitor private/enterprise deployments of Tezos. OCamlPro is available to help and support such deployments.</p>
<h2>Acknowledgments</h2>
<p>We are thankful to the Tezos Foundation and Ryan Jesperson for their support!</p>
<p>All feedback is welcome!</p>
Liquidity Tutorial: A Game with an Oracle for Random Numbers https://ocamlpro.com/blog/2018_11_06_liquidity_tutorial_a_game_with_an_oracle_for_random_numbers2018-11-06T08:12:13Z2018-11-06T08:12:13Z
Alain Mebsout
A Game with an oracle In this small tutorial, we will see how to write a chance game on the Tezos blockchain with Liquidity and a small external oracle which provides random numbers. Principle of the game Rules of the game are handled by a smart contract on the Tezos blockchain. When a player decide...<h1>A Game with an oracle</h1>
<p>In this small tutorial, we will see how to write a chance game on the Tezos blockchain with Liquidity and a small external oracle which provides random numbers.</p>
<h2>Principle of the game</h2>
<p>Rules of the game are handled by a smart contract on the Tezos blockchain.</p>
<p>When a player decides to start a game, she begins by making a transaction (<em>i.e.</em> a call) to the game smart contract with a number parameter (let's call it <code>n</code>) between 0 and 100 (inclusive). The amount sent with this transaction constitutes her bet <code>b</code>.</p>
<p>A random number <code>r</code> is then chosen by the oracle and the outcome of the game is decided by the smart contract.</p>
<ul>
<li>The player <strong>loses</strong> if her number <code>n</code> is <em>greater</em> than <code>r</code>. In this case, she forfeits her bet and the game smart contract is reset (the money stays on the game smart contract).
</li>
<li>The player <strong>wins</strong> if her number <code>n</code> is <em>smaller or equal</em> to <code>r</code>. In this case, she gets back her initial bet <code>b</code> plus a reward proportional to her bet and her chosen number: <code>b * n / 100</code> (see the small sketch below). This means that a higher number <code>n</code>, while being a riskier choice (the random number drawn must be at least as large), yields a greater reward. The edge cases are <code>n = 0</code>, which always wins but with a null reward, and <code>n = 100</code>, which wins only if the random number is also <code>100</code> but doubles the player's bet.
</li>
</ul>
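<p>To make the payout rule concrete, here is a small, purely illustrative
OCaml function computing what a winning player receives back (amounts are
plain floats here for simplicity; the real contract uses the <code>tez</code> type and
natural-number division):</p>
<pre><code class="language-ocaml">(* Illustrative only: payout of a winning game with bet b and chosen number n. *)
let payout ~bet ~n =
  if n > 100 then invalid_arg "n must be <= 100";
  let gain = bet *. float_of_int n /. 100. in
  bet +. gain

let () =
  (* Betting 10 tz on n = 60 pays back 10 + 6 = 16 tz when the game is won. *)
  Printf.printf "%.2f tz\n" (payout ~bet:10. ~n:60)
</code></pre>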
<h2>Architecture of the DApp</h2>
<p>Everything that happens on the blockchain is deterministic and reproducible, which means that smart contracts cannot generate random numbers securely<sup>1</sup>.</p>
<p>The following smart contract works in this manner. Once a user starts a game, the smart contract is put in a state where it awaits a random number from a trusted off-chain source. This trusted source is our random generator oracle. The oracle monitors the blockchain and generates and sends a random number to the smart contract once it detects that it is waiting for one.</p>
<p><img src="/blog/assets/img/draw_game_arch.jpg" alt="" /></p>
<p>Because the oracle waits for a <code>play</code> transaction to be included in a block and sends the random number in a subsequent block, a game round lasts at least two blocks<sup>2</sup>.</p>
<p>This technicality forces us to split our smart contract into two distinct entry points:</p>
<ul>
<li>A first entry point <code>play</code> is called by a player who wants to start a game (it cannot be called twice). The code of this entry point saves the game parameters in the smart contract storage and stops execution (awaiting a random number).
</li>
<li>A second entry point <code>finish</code>, which can only be called by the oracle, accepts a random number as parameter. The code of this entry point computes the outcome of the current game based on the game parameters and the random number, and then proceeds accordingly. At the end of <code>finish</code>, the contract is reset and a new game can be started.
</li>
</ul>
<h2>The Game Smart Contract</h2>
<p>The smart contract game manipulates a storage of the following type:</p>
<pre><code class="language-ocaml">type game = {
number : nat;
bet : tez;
player : key_hash;
}
type storage = { game : game option; oracle_id : address; }
</code></pre>
<p>The storage contains the address of the oracle, <code>oracle_id</code>. It will only accept transactions coming from this address (<em>i.e.</em> that are signed by the corresponding private key). It also contains an optional value <code>game</code> that indicates if a game is being played or not.</p>
<p>A game consists in three values, stored in a record:</p>
<ul>
<li><code>number</code> is the number chosen by the player.
</li>
<li><code>bet</code> is the amount that was sent with the first transaction by the player. It constitutes the bet amount.
</li>
<li><code>player</code> is the key hash (tz1...) on which the player who made the bet wishes to be paid in the event of a win.
</li>
</ul>
<p>We also give an initializer function that can be used to deploy the contract with an initial value. It takes as argument the address of the oracle, which cannot be changed later on.</p>
<pre><code class="language-ocaml">let%init storage (oracle_id : address) =
{ game = (None : game option); oracle_id }
</code></pre>
<h3>The <code>play</code> entry point</h3>
<p>The first entry point, <code>play</code>, takes as argument a pair composed of a natural number, which is the number chosen by the player, and a key hash, which is the address on which the player wishes to be paid, as well as the current storage of the smart contract.</p>
<pre><code class="language-ocaml">let%entry play (number : nat) storage = ...
</code></pre>
<p>The first thing this contract does is validate the inputs:</p>
<ul>
<li>Ensure that the number is a valid choice, <em>i.e.</em> is between 0 and 100 (natural numbers are always greater or equal to 0).
</li>
</ul>
<pre><code class="language-ocaml">if number > 100p then failwith "number must be <= 100";
</code></pre>
<ul>
<li>Ensure that the contract has enough funds to pay the player in case she wins. The highest paying bet is to play <code>100</code>, which means that the player gets paid twice her original bet amount. At this point of the execution, the balance of the contract is already credited with the bet amount, so this check amounts to ensuring that the balance is at least twice the bet.
</li>
</ul>
<pre><code class="language-ocaml">if 2p * Current.amount () > Current.balance () then
failwith "I don't have enough money for this bet";
</code></pre>
<ul>
<li>Ensure that no other game is currently being played so that a previous game is not erased.
</li>
</ul>
<pre><code class="language-ocaml">match storage.game with
| Some g ->
failwith ("Game already started with", g)
| None ->
(* Actual code of entry point *)
</code></pre>
<p>The rest of the code for this entry point consists in simply creating a new <code>game</code> record <code>{ number; bet; player }</code> and saving it to the smart contract's storage. This entry point always returns an empty list of operations because it does not make any contract calls or transfers.</p>
<pre><code class="language-ocaml">let bet = Current.amount () in
let storage = storage.game <- Some { number; bet; player } in
(([] : operation list), storage)
</code></pre>
<p>The new storage is returned and the execution stops at this point, waiting for someone (the oracle) to call the <code>finish</code> entry point.</p>
<h3>The <code>finish</code> entry point</h3>
<p>The second entry point, <code>finish</code> takes as argument a natural number parameter, which is the random number generated by the oracle, as well as the current storage of the smart contract.</p>
<pre><code class="language-ocaml">let%entry finish (random_number : nat) storage = ...
</code></pre>
<p>The random number can be any natural number (these are mathematically unbounded natural numbers), so we must make sure it is between 0 and 100 before proceeding. Instead of rejecting numbers that are too big, we simply divide the number by 101 (Euclidean division) and keep the remainder, which is between 0 and 100. The oracle already generates random numbers between 0 and 100, so this operation does nothing here, but it is worth keeping in case we replace the random generator one day.</p>
<pre><code class="language-ocaml">let random_number = match random_number / 101p with
| None -> failwith ()
| Some (_, r) -> r in
</code></pre>
<p>Smart contracts are public objects on the Tezos blockchain, so anyone can decide to call them. This means that permissions must be handled by the logic of the smart contract itself. In particular, we don't want <code>finish</code> to be callable by anyone; otherwise, the player could choose her own random number. Here we make sure that the call comes from the oracle.</p>
<pre><code class="language-ocaml">if Current.sender () <> storage.oracle_id then
failwith ("Random numbers cannot be generated");
</code></pre>
<p>We must also make sure that a game is currently being played; otherwise, this random number is quite useless.</p>
<pre><code class="language-ocaml">match storage.game with
| None -> failwith "No game already started"
| Some game -> ...
</code></pre>
<p>The rest of the code in the entry point decides if the player won or lost, and generates the corresponding operations accordingly.</p>
<pre><code class="language-ocaml">if random_number < game.number then
(* Lose *)
([] : operation list)
</code></pre>
<p>If the random number is smaller than the chosen number, the player lost. In this case, no operation is generated and the money is kept by the smart contract.</p>
<pre><code class="language-ocaml">else
(* Win *)
let gain = match (game.bet * game.number / 100p) with
| None -> 0tz
| Some (g, _) -> g in
let reimbursed = game.bet + gain in
[ Account.transfer ~dest:game.player ~amount:reimbursed ]
</code></pre>
<p>Otherwise, if the random number is greater or equal to the previously chosen number, then the player won. We compute her gain and the reimbursement value (which is her original bet + her gain) and generate a transfer operation with this amount.</p>
<pre><code class="language-ocaml">let storage = storage.game <- (None : game option) in
(ops, storage)
</code></pre>
<p>Finally, the storage of the smart contract is reset, meaning that the current game is erased. The list of generated operations and the reset storage is returned.</p>
<h3>A safety entry point: <code>fund</code></h3>
<p>At any time, we authorize anyone (most likely the manager of the contract) to add funds to the contract's balance. This allows new players to participate in the game even if the contract has been depleted, simply by adding more funds to it.</p>
<pre><code class="language-ocaml">let%entry fund _ storage =
([] : operation list), storage
</code></pre>
<p>This code does nothing, except accepting transfers with amounts.</p>
<h3>Full Liquidity Code of the Game Smart Contract</h3>
<pre><code class="language-ocaml">[%%version 0.403]
type game = {
number : nat;
bet : tez;
player : key_hash;
}
type storage = {
game : game option;
oracle_id : address;
}
let%init storage (oracle_id : address) =
{ game = (None : game option); oracle_id }
(* Start a new game *)
let%entry play ((number : nat), (player : key_hash)) storage =
if number > 100p then failwith "number must be <= 100";
if Current.amount () = 0tz then failwith "bet cannot be 0tz";
if 2p * Current.amount () > Current.balance () then
failwith "I don't have enough money for this bet";
match storage.game with
| Some g ->
failwith ("Game already started with", g)
| None ->
let bet = Current.amount () in
let storage = storage.game <- Some { number; bet; player } in
(([] : operation list), storage)
(* Receive a random number from the oracle and compute outcome of the
game *)
let%entry finish (random_number : nat) storage =
let random_number = match random_number / 101p with
| None -> failwith ()
| Some (_, r) -> r in
if Current.sender () <> storage.oracle_id then
failwith ("Random numbers cannot be generated");
match storage.game with
| None -> failwith "No game already started"
| Some game ->
let ops =
if random_number < game.number then
(* Lose *)
([] : operation list)
else
(* Win *)
let gain = match (game.bet * game.number / 100p) with
| None -> 0tz
| Some (g, _) -> g in
let reimbursed = game.bet + gain in
[ Account.transfer ~dest:game.player ~amount:reimbursed ]
in
let storage = storage.game <- (None : game option) in
(ops, storage)
(* accept funds *)
let%entry fund _ storage =
([] : operation list), storage
</code></pre>
<h2>The Oracle</h2>
<p>The oracle can be implemented using <a href="http://tezos.gitlab.io/mainnet/api/rpc.html">Tezos RPCs</a> on a running Tezos node. The principle of the oracle is the following:</p>
<ul>
<li>Monitor new blocks in the chain.
</li>
<li>For each new block, look if it includes <strong>successful</strong> transactions whose <em>destination</em> is the <em>game smart contract</em>.
</li>
<li>Look at the parameters of the transaction to see if it is a call to either <code>play</code>, <code>finish</code> or <code>fund</code>.
</li>
<li>If it is a successful call to <code>play</code>, then we know that the smart contract is awaiting a random number.
</li>
<li>Generate a random number between 0 and 100 and make a call to the game smart contract with the appropriate private key (the transaction can be signed, for instance, by a Ledger plugged into the oracle server).
</li>
<li>Wait a small amount of time depending on blocks intervals for confirmation.
</li>
<li>Loop.
</li>
</ul>
<p>These can be implemented with the following RPCs:</p>
<ul>
<li>Monitoring blocks: <code>/chains/main/blocks?[length=<int>]</code><a href="https://tezos.gitlab.io/mainnet/api/rpc.html#get-chains-chain-id-blocks">https://tezos.gitlab.io/mainnet/api/rpc.html#get-chains-chain-id-blocks</a>
</li>
<li>Listing operations in blocks: <code>/chains/main/blocks/<block_id>/operations/3</code><a href="https://tezos.gitlab.io/mainnet/api/rpc.html#get-block-id-operations-list-offset">https://tezos.gitlab.io/mainnet/api/rpc.html#get-block-id-operations-list-offset</a>
</li>
<li>Getting the storage of a contract: <code>/chains/main/blocks/<block_id>/context/contracts/<contract_id>/storage</code><a href="https://tezos.gitlab.io/mainnet/api/rpc.html#get-block-id-context-contracts-contract-id-storage">https://tezos.gitlab.io/mainnet/api/rpc.html#get-block-id-context-contracts-contract-id-storage</a>
</li>
<li>Making transactions or contract calls:
<ul>
<li>Either call the <code>tezos-client</code> binary (easiest if running on a server).
</li>
<li>Call the <code>liquidity file.liq --call ...</code> binary (private key must be in plain text so it is not recommended for production servers).
</li>
</ul>
</li>
</ul>
<p>An implementation of a random number Oracle in OCaml (which uses the liquidity client to make transactions) can be found in this repository: <a href="https://github.com/OCamlPro/liq_game/blob/master/src/crawler.ml">https://github.com/OCamlPro/liq_game/blob/master/src/crawler.ml</a>.</p>
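<p>To give an idea of its shape, here is a heavily simplified, hypothetical
sketch of such a polling loop (assuming <code>cohttp-lwt-unix</code>; the contract
address, the <code>game_is_pending</code> check and the <code>send_random_number</code> action
are placeholders to be filled in, for instance by shelling out to
<code>tezos-client</code> as suggested above):</p>
<pre><code class="language-ocaml">(* Hypothetical sketch of the oracle loop: poll the node for a new head and,
   when the game contract is waiting, send it a random number. *)
open Lwt.Infix

let node = "http://127.0.0.1:9731"
let game_contract = "KT1..."  (* placeholder: address of the game contract *)

let get path =
  Cohttp_lwt_unix.Client.get (Uri.of_string (node ^ path))
  >>= fun (_resp, body) -> Cohttp_lwt.Body.to_string body

(* Placeholder: inspect the storage JSON to see whether a game is pending. *)
let game_is_pending _storage_json = false

(* Placeholder: call the `finish` entry point with the oracle's key,
   e.g. via tezos-client or `liquidity --call`. *)
let send_random_number r =
  Printf.printf "would send random number %d to %s\n%!" r game_contract

let rec loop last_head =
  get "/chains/main/blocks/head/hash" >>= fun head ->
  (if head <> last_head then begin
     get (Printf.sprintf
            "/chains/main/blocks/head/context/contracts/%s/storage"
            game_contract)
     >>= fun storage ->
     if game_is_pending storage then send_random_number (Random.int 101);
     Lwt.return_unit
   end else Lwt.return_unit)
  >>= fun () ->
  Lwt_unix.sleep 30. >>= fun () ->
  loop head

let () = Random.self_init (); Lwt_main.run (loop "")
</code></pre>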
<h3>Try a version on the mainnet</h3>
<p>This contract is deployed on the Tezos mainnet at the following address:<a href="https://tzscan.io/KT1GgUJwMQoFayRYNwamRAYCvHBLzgorLoGo">KT1GgUJwMQoFayRYNwamRAYCvHBLzgorLoGo</a>, with the minor difference that the contract refunds 1 μtz if the player loses to give some sort of feedback. You can try your luck by sending transactions (with a non zero amount) with a parameter of the form <code>Left (Pair 99 &quot;tz1LWub69XbTxdatJnBkm7caDQoybSgW4T3s&quot;)</code> where <code>99</code> is the number you want to play and <code>tz1LWub69XbTxdatJnBkm7caDQoybSgW4T3s</code> is your refund address. You can do so by using either a wallet that supports passing parameters with transactions (like Tezbox) or the command line Tezos client:</p>
<pre><code>tezos-client transfer 10 from my_account to KT1GgUJwMQoFayRYNwamRAYCvHBLzgorLoGo --fee 0 --arg 'Left (Pair 50 "tz1LWub69XbTxdatJnBkm7caDQoybSgW4T3s")'
</code></pre>
<h2>Remarks</h2>
<ul>
<li>In this game, the oracle must be trusted and so it can cheat. To mitigate this drawback, the oracle can be used as a random number generator for several games, if random values are stored in an intermediate contract.
</li>
<li>If the oracle looks for events in the last baked block (head), it is possible that the current chain will be discarded and that the random-number transaction will appear on another chain. In this case, a player who sees this happen could start another game with a well-chosen number, having seen the random number in the mempool. In practice, the oracle operation is created only on the branch where the first player started, so that this operation cannot be put on another branch, removing this risk of attack.
</li>
</ul>
<p><strong>Footnotes</strong></p>
<ol>
<li>
<p>Some contracts on Ethereum use block hashes as sources of randomness but these are easily manipulated by miners so they are not safe to use. There are also ways to have participants contribute parts of a random number with enforceable commitments <a href="https://github.com/randao/randao">https://github.com/randao/randao</a>.</p>
</li>
<li>
<p>The random number could technically be sent in the same block by monitoring the mempool but it is not a good idea because the miner could reorder the transactions which will make both of them fail, or worse she could replace her bet accordingly once she sees a random number in her mempool.</p>
</li>
</ol>
<hr class="featurette-divider"/>
<p><strong>Alain Mebsout</strong>: Alain is a senior engineer at OCamlPro. Alain was involved in Tezos early in 2017, participating in the design of the ICO infrastructure and in particular, the Bitcoin and Ethereum smart contracts. Since then, Alain has been developing the Liquidity language, compiler and online editor, and has started working on the verification of Liquidity smart contracts. Alain also contributed some code in the Tezos node to improve Michelson. Alain holds a PhD in Computer Science on formal verification of programs.</p>
<h1>Comments</h1>
<p>Luiz Milfont (14 December 2018 at 17 h 21 min):</p>
<blockquote>
<p>Hello Mr. Alain Mebsout. My name is Milfont and I am the author of TezosJ SDK library, that allows to interact with Tezos blockchain through Java programming language.I did’t know this game before and got interested. I wonder if you would like me to create an Android version of your game, that would be an Android APP that would create a wallet automatically for the player and then he would pull a jackpot handle, sending the transaction with the parameters to your smart contract. I would like to know if you agree with this, and allow me to do it, using your already deployed game. Thanks in advance. Milfont. Twitter: @luizmilfont</p>
</blockquote>
<p>michsell (1 October 2019 at 15 h 29 min):</p>
<blockquote>
<p>Hello Alain,</p>
<p>I just played the game you designed, the problem is I cannot get any feedback even that 1utz for losing the game. Is the game retired? If so, can anyone help to remove it from tzscan dapps page: https://tzscan.io/dapps. Also, by any chance I may get the tezzies back…</p>
<p>Many thanks!
Best regards,
Michshell</p>
</blockquote>
opam 2.0.1 is out!https://ocamlpro.com/blog/2018_10_24_opam_2.0.1_is_out2018-10-24T08:12:13Z2018-10-24T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the release of opam 2.0.1. This new version contains mainly backported fixes, some platform-specific: Cold boot for MacOS/CentOS/Alpine
Install checksum validation on MacOS
Archive extraction for OpenBSD now defaults to using gtar
Fix compilation of mccs on MacOS and Nix p...<p>We are pleased to announce the release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.1">opam 2.0.1</a>.</p>
<p>This new version contains mainly <a href="https://github.com/ocaml/opam/pull/3560">backported fixes</a>, some platform-specific:</p>
<ul>
<li>Cold boot for MacOS/CentOS/Alpine
</li>
<li>Install checksum validation on MacOS
</li>
<li>Archive extraction for OpenBSD now defaults to using <code>gtar</code>
</li>
<li>Fix compilation of mccs on MacOS and Nix platforms
</li>
<li>Do not use GNU-sed specific features in the release Makefile, to fix build on OpenBSD/FreeBSD
</li>
<li>Cleaning to enable reproducible builds
</li>
<li>Update configure scripts
</li>
</ul>
<p>And some opam specific:</p>
<ul>
<li>git: fix git fetch by sha1 for git < 2.14
</li>
<li>linting: add <code>test</code> variable warning and empty description error
</li>
<li>upgrade: convert pinned but not installed opam files
</li>
<li>error reporting: more comprehensible error message for tar extraction, and upgrade of git-url compilers
</li>
<li>opam show: upgrade given local files
</li>
<li>list: since opam 2.0.0 <code>list</code> doesn't return a non-zero code when the list is empty, add a <code>--silent</code> option that gives no output and returns 1 if the list is empty
</li>
</ul>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.1">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.1#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
OCamlPro’s TzScan grant proposal accepted by the Tezos Foundation – joint press releasehttps://ocamlpro.com/blog/2018_10_17_ocamlpros_tzscan_grant_proposal_accepted_by_the_tezos_foundation_joint_press_release2018-10-17T08:12:13Z2018-10-17T08:12:13Z
Muriel
Tezos Foundation and OCamlPro joint press release - October 17, 2018 We are pleased to announce that the Tezos Foundation has issued a grant to OCamlPro to support its work on TzScan, a block explorer for the Tezos blockchain that will be made open-source. OCamlPro is a French company and R&D lab, f...<h2>Tezos Foundation and OCamlPro joint press release - October 17, 2018</h2>
<p>We are pleased to announce that the Tezos Foundation has issued a grant to OCamlPro to support its work on <a href="https://tzscan.io/">TzScan</a>, a block explorer for the Tezos blockchain that will be made open-source.</p>
<p>OCamlPro is a French company and R&D lab, focused on OCaml and blockchain development. OCamlPro, which is an active community member and contributor to Tezos, has initiated several Tezos-related projects such as <a href="https://tzscan.io/">TzScan</a> and <a href="https://liquidity-lang.org/">Liquidity</a>, a high-level programming language for creating smart contracts in Tezos with an online editor, compiler and debugger, and features a decompiler to audit Michelson contracts.</p>
<p>Open-source block explorers are a key component of a blockchain ecosystem by allowing users to more easily monitor transactions, network validators (“bakers”), and the health of a network. OCamlPro will also provide documentation on Tezos and continue to improve the TzScan API, which may be used by applications such as wallets.</p>
<p>The Tezos Foundation’s core mission is to support the long-term success of the Tezos protocol and ecosystem. By funding projects imagined by scientists, researchers, developers, entrepreneurs, and enthusiasts, the Foundation encourages decentralized development and robust participation.</p>
<p>More information <a href="https://tezos.foundation/news/tezos-foundation-issues-grant-to-ocamlpro-to-support-tzscan">here</a>.</p>
<h3>Curious about OCamlPro's blockchain activities?</h3>
<p>OCamlPro is a French software company and R&D lab, born in 2011 and located in Paris and Essonne. We are dedicated to improving the quality of software, through the use of formal methods, and we promote the use of OCaml, a fast and expressive, statically typed state-of-the-art programming language, matured for more than 30 years in the French public research lab Inria.</p>
<p>In 2014, OCamlPro became involved in the Tezos project, helping with the Tezos protocol design and developing the prototype of Tezos, which later became the official Tezos software. In 2017, OCamlPro developed the ICO infrastructure for Tezos, including the Bitcoin and Ethereum smart contracts. OCamlPro self-funded two big projects around Tezos:</p>
<ul>
<li>The<a href="https://tzscan.io/"> TzScan</a> block-explorer for Tezos: TzScan provides many features specific to Tezos delegated proof-of-stake protocol, to make life easier for bakers. TzScan API can be used by applications, such as wallets and delegation services to provide additional information to their users.
</li>
<li>The <a href="https://liquidity-lang.org/">Liquidity</a> language for Tezos smart-contracts. Liquidity is a programming language, compiled to Michelson. Its online editor can be used to write, deploy, run and debug smart contracts. It also features a decompiler from Michelson, that can be used to audit contracts written in other languages.
</li>
</ul>
<p>In 2018, OCamlPro worked with the Tezos Foundation and Tezos Core Development team to prepare the launch of the betanet network, and later, the mainnet network.</p>
<p>With a team of 10 PhD-level developers working on Tezos, OCamlPro is one of the largest pools of knowledge on Tezos. OCamlPro can provide many services to the Tezos community: improvement of the Tezos software, development of specific software, features and new protocols, training and consulting, and smart contract design, writing and auditing. With tight connections to Inria and other French research labs and universities, OCamlPro is also involved in several research projects related to blockchains or formal methods:</p>
<ul>
<li>Formal methods: OCamlPro is involved in collaborative projects with academic and industrial partners to develop tools for software verification, such as the Alt-Ergo SMT Solver (from LRI).
</li>
<li>OCaml tooling: we help optimize OCaml (flambda) and design development tools for OCaml (open-source most of the time). Such tools range from command-line tools (such as OPAM or ocp-build), or GUI tools (the OCaml Memory Profiler), to web-based tools (TryOCaml, the OCaml MOOC with the learn-OCaml platform of the OCaml Foundation of Inria).
</li>
</ul>
opam 2.0.0 release and repository upgradehttps://ocamlpro.com/blog/2018_09_19_opam_2.0.0_release_and_repository_upgrade2018-09-19T08:12:13Z2018-09-19T08:12:13Z
Raja Boujbel
Louis Gesbert
We are happy to announce the final release of opam 2.0.0. A few weeks ago, we released a last release candidate to be later promoted to 2.0.0, synchronised with the opam package repository upgrade. You are encouraged to update as soon as you see fit, to continue to get package updates: opam 2.0.0 su...<p>We are happy to announce the final release of <a href="https://github.com/ocaml/opam/releases/tag/2.0.0">opam 2.0.0</a>.</p>
<p>A few weeks ago, we released a <a href="https://opam.ocaml.org/blog/opam-2-0-0-rc4">last release candidate</a> to be later promoted to 2.0.0, synchronised with the <a href="https://github.com/ocaml/opam-repository">opam package repository</a> <a href="https://opam.ocaml.org/blog/opam-2-0-0-repo-upgrade-roadmap/">upgrade</a>.</p>
<p>You are encouraged to update as soon as you see fit, to continue to get package updates: opam 2.0.0 supports the older formats, and 1.2.2 will no longer get regular updates. See the <a href="http://opam.ocaml.org/2.0-preview/doc/Upgrade_guide.html">Upgrade Guide</a> for details about the new features and changes.</p>
<p>The website opam.ocaml.org has been updated, with the full 2.0.0 documentation pages. You can still find the documentation for the previous versions in the corresponding menu.</p>
<p>Package maintainers should be aware of the following:</p>
<ul>
<li>the master branch of the <a href="https://github.com/ocaml/opam-repository">opam package repository</a> is now in the 2.0.0 format
</li>
<li>package submissions must accordingly be made in the 2.0.0 format, or using the new version of <code>opam-publish</code> (2.0.0)
</li>
<li>anything that was merged into the repository in 1.2 format has been automatically updated to the 2.0.0 format
</li>
<li>the 1.2 format repository has been forked to its own branch, and will only be updated for critical fixes
</li>
</ul>
<p>For custom repositories, the <a href="https://opam.ocaml.org/blog/opam-2-0-0-repo-upgrade-roadmap/#Advice-for-custom-repository-maintainers">advice</a> remains the same.</p>
<hr />
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.0">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.0-rc4#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new major version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
Last stretch! Repository upgrade and opam 2.0.0 roadmaphttps://ocamlpro.com/blog/2018_08_02_last_stretch_repository_upgrade_and_opam_2.0.0_roadmap2018-08-02T08:12:13Z2018-08-02T08:12:13Z
Raja Boujbel
Louis Gesbert
A few days ago, we released opam 2.0.0~rc4, and explained that this final release candidate was expected to be promoted to 2.0.0, in sync with an upgrade to the opam package repository. So here are the details about this! If you are an opam user, and don't maintain opam packages You are encouraged to u...<p>A few days ago, we released <a href="https://opam.ocaml.org/blog/opam-2-0-0-rc4/">opam 2.0.0~rc4</a>, and explained that this final release candidate was expected to be promoted to 2.0.0, in sync with an upgrade to the <a href="https://github.com/ocaml/opam-repository">opam package repository</a>. So here are the details about this!</p>
<h2>If you are an opam user, and don't maintain opam packages</h2>
<ul>
<li>
<p>You are encouraged to <a href="https://opam.ocaml.org/blog/opam-2-0-0-rc4/">upgrade</a> as soon as comfortable, and get used to the <a href="http://opam.ocaml.org/2.0-preview/doc/Upgrade_guide.html">changes and new features</a>.</p>
</li>
<li>
<p>All packages installing in opam 1.2.2 should exist and install fine on 2.0.0~rc4 (if you find one that doesn't, <a href="https://github.com/ocaml/opam/issues">please report</a>!)</p>
</li>
<li>
<p>If you haven't updated by <strong>September 17th</strong>, the amount of updates and new packages you receive may become limited<a href="#foot-1">¹</a>.</p>
</li>
</ul>
<h2>So what will happen on September 17th ?</h2>
<ul>
<li>
<p>Opam 2.0.0~rc4 gets officially released as 2.0.0</p>
</li>
<li>
<p>On the <code>ocaml/opam-repository</code> Github repository, a 1.2 branch is forked, and the 2.0.0 branch is merged into the master branch</p>
</li>
<li>
<p>From then on, pull-requests to <code>ocaml/opam-repository</code> need to be in 2.0.0 format. Fixes to the 1.2 repository can be merged if important: pulls need to be requested against the 1.2 branch in that case.</p>
</li>
<li>
<p>The opam website shows the 2.0.0 repository by default (https://opam.ocaml.org/2.0-preview/ becomes https://opam.ocaml.org/)</p>
</li>
<li>
<p>The http repositories for 1.2 and 2.0 (as used by <code>opam update</code>) are accordingly moved, with proper redirections put in place</p>
</li>
</ul>
<h2>Advice for package maintainers</h2>
<ul>
<li>
<p>Until September 17th, pull-requests filed to the master branch of <code>ocaml/opam-repository</code> need to be in 1.2.2 format</p>
</li>
<li>
<p>The CI checks for all PRs ensure that the package passes on both 1.2.2 and 2.0.0. After the 17th of September, only 2.0.0 will be checked (and 1.2.2 only if relevant fixes are required).</p>
</li>
<li>
<p>The 2.0.0 branch of the repository will contain the automatically updated 2.0.0 version of your package definitions</p>
</li>
<li>
<p>You can publish 1.2 packages while using opam 2.0.0 by installing <code>opam-publish.0.3.5</code> (running <code>opam pin opam-publish 0.3.5</code> is recommended)</p>
</li>
<li>
<p>You should only need to keep an opam 1.2 installation for more complex setups (multiple packages, or if you need to be able to test the 1.2 package installations locally). In this case you might want to use an alias, <em>e.g.</em> <code>alias opam.1.2="OPAMROOT=$HOME/.opam.1.2 ~/local/bin/opam.1.2"</code>. You should also probably disable opam 2.0.0's automatic environment update in that case (<code>opam init --disable-shell-hook</code>).</p>
</li>
<li>
<p><code>opam-publish.2.0.0~beta</code> has a fully revamped interface, and many new features, like filing a single PR for multiple packages. It files pull-request <strong>in 2.0 format only</strong>, however. At the moment, it will file PR only to the 2.0.0 branch of the repository, but pushing 1.2 format packages to master is still preferred until September 17th.</p>
</li>
<li>
<p>It is also advised to keep in-source opam files in 1.2 format until that date, so as not to break uses of <code>opam pin add --dev-repo</code> by opam 1.2 users. The small <code>opam-package-upgrade</code> plugin can be used to upgrade single 1.2 <code>opam</code> files to 2.0 format.</p>
</li>
<li>
<p><a href="https://github.com/ocaml/ocaml-ci-scripts"><code>ocaml-ci-script</code></a> already switched to opam 2.0.0. To keep testing opam 1.2.2, you can set the variable <code>OPAM_VERSION=1.2.2</code> in the <code>.travis.yml</code> file.</p>
</li>
</ul>
<h2>Advice for custom repository maintainers</h2>
<ul>
<li>
<p>The <code>opam admin upgrade</code> command can be used to upgrade your repository to 2.0.0 format. We recommend using it, as otherwise clients using opam 2.0.0 will do the upgrade locally every time. Add the option <code>--mirror</code> to continue serving both versions, with automatic redirects.</p>
</li>
<li>
<p>It's up to you to decide when/if you want to switch your base repository to 2.0.0 format. You'll benefit from many new possibilities and safety features, but that will exclude users of earlier opam versions, as there is no backwards conversion tool.</p>
</li>
</ul>
<p><a id="foot-1">¹</a> Sorry for the inconvenience. We'd be happy if we could keep maintaining the 1.2.2 repository for more time; repository maintainers are doing an awesome job, but just don't have the resources to maintain both versions in parallel.</p>
opam 2.0.0 RC4-final is out!https://ocamlpro.com/blog/2018_07_26_opam_2.0.0_rc4_final_is_out2018-07-26T08:12:13Z2018-07-26T08:12:13Z
Raja Boujbel
Louis Gesbert
We are happy to announce the opam 2.0.0 final release candidate! 🍾 This release features a few bugfixes over Release Candidate 3. It will be promoted to 2.0.0 proper within a few weeks, when the official repository format switches from 1.2.0 to 2.0.0. After that date, updates to the 1.2.0 reposit...<p>We are happy to announce the <a href="https://github.com/ocaml/opam/releases/tag/2.0.0-rc4">opam 2.0.0 final release candidate</a>! 🍾</p>
<p>This release features a few bugfixes over <a href="/2018/07/26/opam-2-0-0-rc3">Release Candidate 3</a>. <strong>It will be promoted to 2.0.0 proper within a few weeks, when the <a href="https://github.com/ocaml/opam-repository">official repository</a> format switches from 1.2.0 to 2.0.0.</strong> After that date, updates to the 1.2.0 repository may become limited, as new features are getting used in packages.</p>
<p>It is safe to update as soon as you see fit, since opam 2.0.0 supports the older formats. See the <a href="https://opam.ocaml.org/2.0-preview/doc/Upgrade_guide.html">Upgrade Guide</a> for details about the new features and changes. If you are a package maintainer, you should keep publishing as before for now: the <a href="https://opam.ocaml.org/blog/opam-2-0-0-repo-upgrade-roadmap">roadmap</a> for the repository upgrade will be detailed shortly.</p>
<p>The opam.ocaml.org pages have also been refreshed a bit, and the new version showing the 2.0.0 branch of the repository is already online at <a href="https://opam.ocaml.org/2.0-preview/">https://opam.ocaml.org/2.0-preview/</a> (report any issues <a href="https://github.com/ocaml/opam2web/issues">here</a>).</p>
<hr />
<p>Installation instructions:</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.0-rc4">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.0-rc4#compiling-this-repo">README</a>.
</li>
</ol>
<p>We hope you enjoy this new version, and remain open to <a href="https://github.com/ocaml/opam/issues">bug reports</a> and <a href="https://github.com/ocaml/opam/issues">suggestions</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
OCamlPro’s Tezos block explorer TzScan’s last updates https://ocamlpro.com/blog/2018_07_20_new_updates_on_tzscan_22018-07-20T08:12:13Z2018-07-20T08:12:13Z
Çagdas Bozman
OCamlPro is pleased to announce the latest update of TzScan (https://tzscan.io), its Tezos block explorer to ease the use of the Tezos network. TzScan is now ready for the protocol update scheduled for tomorrow. In addition to some minor bugfixes, the main novelties are: Displaying of obtained and e...<p>OCamlPro is pleased to announce the latest update of TzScan (<a href="http://tzscan.io">https://tzscan.io</a>), its Tezos block explorer to ease the use of the Tezos network. TzScan is now ready for the protocol update scheduled for tomorrow. In addition to some minor bugfixes, the main novelties are:</p>
<ul>
<li>Display of obtained and <a href="https://tzscan.io/tz3UoffC7FG7zfpmvmjUmUeAaHvzdcUvAj6r?default=rewards">expected rewards</a>
</li>
<li>Addition of the <a href="https://tzscan.io/tz3UoffC7FG7zfpmvmjUmUeAaHvzdcUvAj6r">internal transactions</a> of smart contracts
</li>
<li>Addition of <a href="https://tzscan.io/delegation-services">delegation services</a>
</li>
<li>Aliasing of known accounts and sponsors
</li>
<li>UX improvements and faster navigation
</li>
<li>Improvements on desktop, tablets and mobiles
</li>
</ul>
<p>We continue to maintain the alphanet and zeronet branches in parallel with the betanet.</p>
<p>We keep working hard to improve and add new features to TzScan. Further enhancements and optimizations are to come. Enjoy and play with our explorer!
If you have any suggestions or bugs to report, please notify us at <a href="mailto:contact@tzscan.io">contact@tzscan.io</a>.</p>
opam 2.0.0 Release Candidate 3 is out!https://ocamlpro.com/blog/2018_06_22_opam_2.0.0_release_candidate_3_is_out2018-06-22T08:12:13Z2018-06-22T08:12:13Z
Raja Boujbel
Louis Gesbert
We are pleased to announce the release of a third release candidate for opam 2.0.0. This one is expected to be the last before 2.0.0 comes out. Changes since the 2.0.0~rc2 are, as expected, mostly fixes. We deemed it useful, however, to bring in the following: a new command opam switch link that all...<p>We are pleased to announce the release of a third release candidate for opam 2.0.0. This one is expected to be the last before 2.0.0 comes out.</p>
<p>Changes since the <a href="../opam-2-0-0-rc2">2.0.0~rc2</a> are, as expected, mostly fixes. We deemed it useful, however, to bring in the following:</p>
<ul>
<li>a new command <code>opam switch link</code> that allows you to select a switch to be used in a given directory (particularly convenient if you use the shell hook for automatic opam environment update)
</li>
<li>a new option <code>opam install --assume-built</code>, that allows you to install a package using its normal opam procedure, but for a source repository that has been built by hand. This fills a gap that remained in the local development workflows.
</li>
</ul>
<p>The preview of the opam 2 webpages can be browsed at http://opam.ocaml.org/2.0-preview/ (please report issues <a href="https://github.com/ocaml/opam2web/issues">here</a>).</p>
<p>Installation instructions (unchanged):</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.0-rc3">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.0-rc3#compiling-this-repo">README</a>.
</li>
</ol>
<p>Thanks a lot for testing out this new RC and <a href="https://github.com/ocaml/opam/issues">reporting</a> any issues you may find.</p>
opam 2.0.0 Release Candidate 2 is out!https://ocamlpro.com/blog/2018_05_22_opam_2.0.0_release_candidate_2_is_out2018-05-22T08:12:13Z2018-05-22T08:12:13Z
Louis Gesbert
We are pleased to announce the release of a second release candidate for opam 2.0.0. This new version brings us very close to a final 2.0.0 release, and in addition to many fixes, features big performance enhancements over the RC1. Among the new features, we have squeezed in full sandboxing of packa...<p>We are pleased to announce the release of a second release candidate for opam 2.0.0.</p>
<p>This new version brings us very close to a final 2.0.0 release, and in addition to many fixes, features big performance enhancements over the RC1.</p>
<p>Among the new features, we have squeezed in full sandboxing of package commands for both Linux and macOS, to protect our users from any <a href="http://opam.ocaml.org/blog/camlp5-system/">misbehaving scripts</a>.</p>
<blockquote>
<p>NOTE: if upgrading manually from 2.0.0~rc, you need to run
<code>opam init --reinit -ni</code> to enable sandboxing.</p>
</blockquote>
<p>The new release candidate also offers the possibility to set up a hook in your shell, so that you won't need to run <code>eval $(opam env)</code> anymore. This is especially useful in combination with local switches, because with it enabled, you are guaranteed that running <code>make</code> from a project directory containing a local switch will use it.</p>
<p>The documentation has also been updated, and a preview of the opam 2 webpages can be browsed at http://opam.ocaml.org/2.0-preview/ (please report issues <a href="https://github.com/ocaml/opam2web/issues">here</a>). This provides the list of packages available for opam 2 (the <code>2.0</code> branch of <a href="https://github.com/ocaml/opam-repository/tree/2.0.0">opam-repository</a>), including the <a href="https://opam.ocaml.org/2.0-preview/packages/ocaml-base-compiler/">compiler packages</a>.</p>
<p>Installation instructions:</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.0-rc2">the Github "Releases" page</a> to your PATH. In this case, don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained, and don't forget to run <code>opam init --reinit -ni</code> to enable sandboxing if you had version 2.0.0~rc manually installed)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.0-rc2#compiling-this-repo">README</a>.
</li>
</ol>
<p>Thanks a lot for testing out this new RC and <a href="https://github.com/ocaml/opam/issues">reporting</a> any issues you may find.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
Release of Alt-Ergo 2.2.0https://ocamlpro.com/blog/2018_04_23_release_of_alt_ergo_2_2_02018-04-23T08:12:13Z2018-04-23T08:12:13Z
Mohamed Iguernlala
A new release of Alt-Ergo (version 2.2.0) is available. You can get it from Alt-Ergo's website. An OPAM package for it will be published in the next few days. The major novelty of this release is a new experimental front-end that supports the SMT-LIB 2 language, extended prenex polymorphism. This ex...<p>A new release of Alt-Ergo (version 2.2.0) is available.</p>
<p>You can get it from <a href="https://alt-ergo.ocamlpro.com/#releases">Alt-Ergo's website</a>. An OPAM package for it will be published in the next few days.</p>
<p>The major novelty of this release is a new experimental front-end that supports the SMT-LIB 2 language, extended with prenex polymorphism. This extension is implemented as a standalone library, available here: <a href="https://github.com/Coquera/psmt2-frontend">https://github.com/Coquera/psmt2-frontend</a></p>
<p>The full list of CHANGES is available <a href="https://github.com/OCamlPro/alt-ergo/blob/2.2.0/sources/CHANGES">here</a>. As usual, do not hesitate to report bugs, to ask questions, or to give your feedback!</p>
Taskforce on the Tezos Protocol, and TzScan evolutionhttps://ocamlpro.com/blog/2018_04_13_taskforce_on_the_tezos_protocol_and_tzscan_evolution2018-04-13T08:12:13Z2018-04-13T08:12:13Z
Michael Laporte
As we are preparing to work on the Tezos Protocol, we're still actively keeping the pace on the block explorer TZScan.io, adding cool information for baking accounts. We'd like to allow people to see who is contributing to the network and to understand the distribution of rolls, rights, etc. For sta...<p>As we are preparing to <a href="https://twitter.com/TezosFoundation/status/984814729213480960">work on the Tezos Protocol</a>, we're still actively keeping the pace on the block explorer TZScan.io, adding cool information for baking accounts. We'd like to allow people to see who is contributing to the network and to understand the distribution of rolls, rights, etc.</p>
<p>For starters, we are showing the roll balance used for baking in the current cycle and the rolls history of a baker.</p>
<blockquote>
<p><a href="https://tzscan.io/tz1MqVR7hnZwH1FoQ7swjamanNxrXtNVAQ7v?default=baking">https://tzscan.io/tz1MqVR7hnZwH1FoQ7swjamanNxrXtNVAQ7v?default=baking</a></p>
</blockquote>
<p>Enjoy, more to come in the next weeks!</p>
OCaml JTRThttps://ocamlpro.com/blog/2018_04_01_ocaml_jtrt2018-04-01T08:12:13Z2018-04-01T08:12:13Z
chambart
This time of the year is, just like Christmas time, a time for laughs and magic... although the magic we are talking about, in the OCaml community, is not exactly nice, nor beautiful. Let's say that we are somehow akin to many religions: we know magic does exist , but that it is satanic and shouldn'...<p>This time of the year is, just like Christmas time, a time for laughs and magic... although the magic we are talking about, in the OCaml community, is not exactly nice, nor beautiful. Let's say that we are somehow akin to many religions: we know magic <a href="http://caml.inria.fr/pub/docs/manual-ocaml/libref/Obj.html#VALmagic"><em>does</em> exist</a> , but that it is <a href="https://en.wikipedia.org/wiki/Religious_debates_over_the_Harry_Potter_series">satanic and shouldn't be introduced to children</a>.</p>
<h2>Introducing Just The Right Time (JTRT)</h2>
<p>Let me first introduce you to the concept of 'Just The Right Time' <a href="#footnote1">[1]</a>. JTRT is somehow a 'Just In Time' compiler, but one that runs at <em>the</em> right time, not at some random moment decided by a contrived heuristic.</p>
<p>How does the compiler know when that specific good moment occurs? Well, it doesn't, and that's the point: you certainly know far better. In the OCaml world, we like good performance, like anyone else, but we prefer predictable performance to performance that may sometimes be awesome and sometimes really slow. And we are ready to trade off some annotations for better predictability (<em>or is it just me trying to give the impression that my opinion is everyone's opinion...</em>). Don't forget that OCaml is a compiled language; hence the average generated code is good enough. Runtime compilation only matters for some subtle situations where a pattern gets repeated a lot, and you don't know about that pattern before receiving some inputs.</p>
<p>Of course the tradeoff wouldn't be the same in Javascript if you had to write something like that to get your code to perform decently.</p>
<pre><code class="language-javascript">function fact(n) {
"compile this";
if (n == 0) {
"compile this too";
return 1
} else {
"Yes, I really want to compile that";
return (n * fact(n - 1););
}
}
</code></pre>
<h2>The magical <code>this_is_the_right_time</code> function</h2>
<p>There are already nice tools for doing that in OCaml. In particular, you should look at metaocaml, which is an extension of the language that has been maintained for years. But it requires you to think a bit about what your program is doing and add a few types, here and there.</p>
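<p>To give an idea of the kind of annotations involved, here is a rough sketch in BER MetaOCaml syntax (written from memory purely for illustration; the <code>code</code> type, the brackets <code>.&lt; ... &gt;.</code> and the escapes <code>.~</code> are MetaOCaml constructs, and are not used by the hack described below):</p>
<pre><code class="language-ocaml">(* Staged power: the exponent is known "at the right time",
   so we generate a specialized multiplication chain. *)
let rec spower (n : int) (x : int code) : int code =
  if n = 0 then .< 1 >.
  else .< .~x * .~(spower (n - 1) x) >.

let power7 : (int -> int) code =
  .< fun x -> .~(spower 7 .< x >.) >.

(* Runcode.run power7 compiles the generated code and yields
   a plain [int -> int] function specialized for exponent 7. *)
</code></pre>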
<p>Fortunately, today is the day you may want to try this ugly weekend hack instead.</p>
<p>To add a bit of context, let's say there are 1/ the Dirty Little Tricks, and 2/ the Other Kind of Ugly Hacks. We are presenting one of the latter; the kind of hacks for which you are both ashamed <em>and</em> a bit proud (but you should really be a lot more ashamed). I've made quite a few of those, and this one would probably rank well among the top 5 (and I'm deeply sorry about the other ones that are still in production somewhere...).</p>
<p>This is composed of two parts: a small compiler patch, and a runtime library. That library only exposes the following single function:</p>
<pre><code class="language-ocaml">val this_is_the_right_time : 'a -> 'a
</code></pre>
<p>Let's take an example:</p>
<pre><code class="language-ocaml">let f x =
let y = x + x in
let g z = z * y in
g
let multiply_by_six = f 3
</code></pre>
<p>You can 'optimize' it by changing it to:</p>
<pre><code class="language-ocaml">let f x =
let y = x + x in
let g z = z * y in
g
let multiply_by_six = this_is_the_right_time (f 3)
</code></pre>
<p>That's all. By stating that this is the right time, you told the compiler to take that function and do its magic on it.</p>
<h2>How the f**k does that work?!</h2>
<p>The compiler patch is quite simple. It adds to every function some annotation to allow the compiler to know enough things about it. (It is annotated with its representation in the Flambda IR.) This is just a partial dump of the compiler memory state when transforming the Flambda IR to clambda. I tried to do it in some more 'disciplined' way (it used some magic to traverse the compiler internal memory representation to create a static version of it in the binary), but 'ld' was not so happy linking a ~500MB binary. So I went the 'marshal' way.</p>
<p>This now means that at runtime the program can find the representation of the closures. To give an example of the kind of code you really shouldn't write, here is the magic invocation to retrieve that representation:</p>
<pre><code class="language-ocaml">let extract_representation_from_closure (value:'a)
: Flambda.set_of_closures =
let obj = Obj.repr value in
let size = Obj.size obj in
let id = Obj.obj (Obj.field obj (size - 2)) in
let marshalled = Obj.field obj (size - 1) in
(Marshal.from_string marshalled 0).(id)
</code></pre>
<p>With that, we now know the layout of the closure and we can extract all the variables that it binds. We can further inspect the value of those bound variables, and build an IR representation for them. That's the nice thing about having an untyped IR, you can produce some even when you lost the types. It will just probably be quite wrong, but who cares...</p>
<p>Now that we know everything about our closure, we can rebuild it, and so we will. As we can't statically build a non-closed function (the flambda IR happens after closure conversion), we will instead build a closed function that allocates the closure for us. For our example, it would look like this:</p>
<pre><code class="language-ocaml">let build_my_closure previous_version_of_the_closure =
let closure_field_y = previous_version_of_the_closure.y in
fun z -> z * 6 (* closure_field_y * closure_field_y *)
</code></pre>
<p>In that case the function that we are building is closed, so we don't need the old closure to extract its field. But this shows the generic pattern. This would be used like that:</p>
<pre><code class="language-ocaml">let this_is_the_right_time optimize_this =
let ir_version = extract_representation_from_closure optimize_this in
let build_my_closure = magic_building_function ir_version in
build_my_closure optimize_this
</code></pre>
<p>I won't go too much into the details of the <code>magic_building_function</code>, because it would be quite tedious. Let's just say that it is using mechanisms provided for the native toplevel of OCaml.</p>
<h2>A more sensible example</h2>
<p>To finish on something a bit more interesting than <code>multiply_by_six</code>, let's suppose that we designed a super nice language whose AST and evaluator are:</p>
<pre><code class="language-ocaml">type expr =
| Add of expr * expr
| Const of int
| Var
let rec eval_expr expr x =
match expr with
| Add (e1, e2) -> eval_expr e1 x + eval_expr e2 x
| Const i -> i
| Var -> x
</code></pre>
<p>But we want to optimize it a bit, and hence wrote a super powerful pass:</p>
<pre><code class="language-ocaml">let rec optimize expr =
match expr with
| Add (Const n1, Add (e, Const n2)) -> Add (Const (n1 + n2), optimize e)
| Add (e1, e2) -> Add (optimize e1, optimize e2)
| _ -> expr
</code></pre>
<p>The user writes some expression that gets parsed to <code>Add (Const 11, Add (Var, Const 22))</code>; it goes through the optimizer and results in <code>Add (Const 33, Var)</code>. Then you find that this looks like <em>the right time</em>.</p>
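<p>Concretely, the <code>user_ast</code> referenced in the snippet below could be defined directly from the values above (a small illustration to make the snippet self-contained):</p>
<pre><code class="language-ocaml">(* The parsed user expression. *)
let user_ast = Add (Const 11, Add (Var, Const 22))

(* optimize user_ast = Add (Const 33, Var) *)
</code></pre>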
<pre><code class="language-ocaml">let optimized =
this_is_the_right_time
(fun x -> (eval_expr (optimize user_ast) x))
</code></pre>
<p>Annnnd... nothing happens. The reason being that there is no way to distinguish between mutable and immutable values at runtime, hence the safe assumption is to assume that everything is mutable, which limits optimizations a lot. So let's enable the 'special' mode:</p>
<pre><code class="language-ocaml">incorrect_mode := true
</code></pre>
<p>And MAGIC happens! The code that gets spat out is exactly what we want (that is, <code>fun x -> 33 + x</code>).</p>
<h2>Conclusion</h2>
<p>Just so that you know, I don't really recommend using it. It's buggy, and many details are left unresolved (I suspect that the names you would come up with for that kind of <em>detail</em> would often sound like 'segfault'). Flambda was not designed to be used that way. In particular, there are some invariants that must be maintained, like the uniqueness of variables and functions... that we completely disregarded. That led to some 'funny' behaviors (like <code>power 2 8</code> returning <code>512</code>...). It is possible to do that correctly, but that would require far more than a few hours' hacking. This might be a lot easier with the upcoming version of Flambda.</p>
<p>So this is far from ready, and it's not going to be anytime soon (<em>supposing that this is a good idea, which I'm still not convinced it is</em>).</p>
<p>But if you still want to play with it: <a href="https://github.com/chambart/ocaml-1/tree/flambda_jit">the sources are available.</a></p>
<hr />
<p><span id="footnote1">[1]</span> Not that it exists in real-world.</p>
Release of Alt-Ergo 2.1.0 https://ocamlpro.com/blog/2018_03_15_release_of_alt_ergo_2_1_02018-03-15T08:12:13Z2018-03-15T08:12:13Z
Mohamed Iguernlala
A new release of Alt-Ergo (version 2.1.0) is available on Alt-Ergo's website: https://alt-ergo.ocamlpro.com/#releases. An OPAM package for it will be published soon. In this release, we mainly improved the CDCL-based SAT solver to get performances similar to/better than the old Tableaux-like SAT. Th...<p>A new release of Alt-Ergo (version 2.1.0) is available on Alt-Ergo's website: <a href="https://alt-ergo.ocamlpro.com/#releases">https://alt-ergo.ocamlpro.com/#releases</a>. An OPAM package for it will be published soon.</p>
<p>In this release, we mainly improved the CDCL-based SAT solver to get performance similar to, or better than, the old Tableaux-like SAT solver. The CDCL solver is now the default Boolean reasoner. The full list of CHANGES is available <a href="https://github.com/OCamlPro/alt-ergo/blob/2.1.0/sources/CHANGES">here</a>.</p>
<p>Despite our various tests, you may still encounter some issues with this new solver. Please, don't hesitate to report bugs, ask questions, and give your feedback!</p>
New updates on TzScan https://ocamlpro.com/blog/2018_03_14_new_updates_on_tzscan2018-03-14T08:12:13Z2018-03-14T08:12:13Z
Çagdas Bozman
Update - TZScan.io can now work on top of the zeronet (zeronet.tzscan.io), we hope it can help the developers community monitor the network. You can now switch between the alphanet & zeronet networks! OCamlPro is pleased to announce an update of TzScan (https://tzscan.io), its Tezos block explorer t...<blockquote>
<p>Update - <a href="https://tzscan.io/">TZScan.io</a> can now work on top of the zeronet (<a href="https://zeronet.tzscan.io/">zeronet.tzscan.io</a>); we hope it can help the developer community monitor the network. You can now switch between the alphanet & zeronet networks!</p>
</blockquote>
<p>OCamlPro is pleased to announce an update of TzScan
(https://tzscan.io), its Tezos block explorer to ease the use of the
Tezos network.</p>
<p>In addition to some minor bugfixes, the main novelties are:</p>
<ul>
<li><a href="https://tzscan.io/health">Health</a> of the network with stats about the blocks, endorsements, bakers, etc.
</li>
<li>Display of future <a href="https://tzscan.io/baking-rights">baker’s rights</a> in the current cycle
</li>
<li>For each account, a more <a href="https://tzscan.io/tz1UsgSSdRwwhYrqq7iVp2jMbYvNsGbWTozp">detailed balance</a> including the bonds, rewards, fees, etc. for the current cycle and its future baking positions
</li>
<li>A new feature to <a href="http://tzscan.io/inject-signed-operation">inject signed</a> operations in the network
</li>
<li>In the detailed block’s view, all blocks are displayed at the same level in alternative chains
</li>
<li>UI improvements on desktop, tablet and mobile
</li>
</ul>
<p>We are still working hard to improve and add new features to
TzScan. Further enhancements and optimizations are to come. Enjoy and
play with our explorer.</p>
<p>If you have suggestions or bugs, please send us reports at contact@tzscan.io</p>
Release of a first version of TzScan, a Tezos block explorer https://ocamlpro.com/blog/2018_02_14_release_of_a_first_version_of_tzscan_io_a_tezos_block_explorer2018-02-14T08:12:13Z2018-02-14T08:12:13Z
Çagdas Bozman
OCamlPro is proud to release a first version of TzScan, its Tezos block explorer to ease the use of the Tezos network. What TzScan can do for you : Several charts on blocks, operations, network, volumes, fees, and more,
Marketcap and Futures/IOU prices from coinmarket.com,
Blocks, operations, accoun...<p><img src="/blog/assets/img/logo_tzscan_tezos_b_e.png" alt="" /></p>
<p>OCamlPro is proud to release a first version of <a href="https://tzscan.io/">TzScan</a>, its Tezos
block explorer to ease the use of the Tezos network.</p>
<p>What TzScan can do for you :</p>
<ul>
<li>Several charts on blocks, operations, network, volumes, fees, and more,
</li>
<li>Marketcap and Futures/IOU prices from coinmarket.com,
</li>
<li>Blocks, operations, accounts and contracts detail pages,
</li>
<li>Public API to get information about blocks, operations, accounts and more,
</li>
<li>Documentation on different concepts of Tezos like Endorsements, Nonces, etc.
</li>
</ul>
<p>What we tried to do with TzScan is to present the Tezos network
differently, to give a better understanding of what is really going on
by highlighting the main aspects of Proof of Stake. Further enhancements
and optimizations are to come, but enjoy and play with our explorer.</p>
<p>If you have suggestions or bugs, please send us reports at contact@tzscan.io !</p>
OCamlPro’s Liquidity-lang demo at JFLA2018 – a smart-contract design language https://ocamlpro.com/blog/2018_02_08_liquidity_smart_contract_deploy_live_demo_on_tezos_alphanet_jfla20182018-02-08T08:12:13Z2018-02-08T08:12:13Z
Çagdas Bozman
As a tradition, we took part in this year's Journées Francophones des Langages Applicatifs (JFLA 2018) that was chaired by LRI's Sylvie Boldo and hosted in Banyuls the last week of January. That was a nice opportunity to present a live demo of a multisignature smart-contract entirely written in th...<p>As a tradition, we took part in this year's <a href="https://jfla.inria.fr/index.html"> Journées Francophones des Langages Applicatifs</a> (JFLA 2018) that was chaired by LRI's Sylvie Boldo and hosted in Banyuls the last week of January. That was a nice opportunity to present a <a href="https://twitter.com/OCamlPro/status/956574674477047808">live demo</a> of a multisignature smart-contract entirely written in the Liquidity language designed at OCamlPro, and deployed live on the Tezos alphanet <em>(the slides are now available, see at the end of the post)</em>.</p>
<p>Tezos is the only blockchain to use a <em>strongly</em> typed, <em>functional</em> language, with formal semantics and an interpreter validated by the use of GADTs (generalized algebraic data types). This stack-based language, named <em>Michelson</em>, is somewhat tricky to use as-is, the absence of variables (among other things) making it necessary to manipulate the stack directly. For this reason, we have developed, starting in June 2017, a higher level language, <em>Liquidity</em>, implementing the type system of Michelson in a subset of OCaml.</p>
<p>In addition to the compiler, which translates Liquidity programs to Michelson, we have developed a decompiler which, from Michelson code, can recover a Liquidity version that is much easier to read and understand (for humans). This tool is of some significance considering that contracts are stored on the blockchain in Michelson format: it makes them more approachable and understandable for end users.</p>
<p>To facilitate designing contracts and foster Liquidity adoption, we have also developed a web application. This app offers somewhat bare-bones editors for Liquidity and Michelson, allows compilation directly in the browser, deployment of Liquidity contracts, and interaction with them (on the Tezos alphanet).</p>
<p>This blog post presents these different tools in more detail.</p>
<h2>Michelson</h2>
<p>Michelson is a stack-based, functional, statically and strongly typed language. It comes with a set of built-in base types like strings, Booleans, unbounded integers and naturals, lists, pairs, option types, union (of two) types, sets, and maps. There are also a number of domain-specific types like amounts (in tezzies), cryptographic keys and signatures, dates, <em>etc</em>. A Michelson program consists of a <em>structured</em> sequence of instructions, each of which operates on the stack. The program takes as inputs a parameter as well as a storage, and returns a result and a new value for the storage. It can fail at runtime with the instruction <code>FAIL</code>, or with another error (call of a failing contract, out of gas, <em>etc.</em>), but most instructions that could fail return an option instead (<em>e.g.</em> <code>EDIV</code> returns <code>None</code> when dividing by zero). The following example is a smart contract which implements a voting system on the blockchain. The storage consists of a map from possible votes (as strings) to integers counting the number of votes. A transaction to this contract must be made with an amount (accessible with the instruction <code>AMOUNT</code>) greater than or equal to 5 tezzies and a parameter which is a valid vote. If one of these conditions is not respected, the execution, and thus the transaction, fails. Otherwise the program retrieves the previous number of votes in the storage and increments it. At the end of the execution, the stack contains the pair composed of the value <code>Unit</code> and the updated map (the new storage).</p>
<pre><code class="language-makefile">parameter string;
storage (map string int);
return unit;
code
{ # Pile = [ Pair parameter storage ]
PUSH tez "5.00"; AMOUNT; COMPARE; LT;
IF # Is AMOUNT < 5 tz ?
{ FAIL }
{
DUP; DUP; CAR; DIP { CDR }; GET; # GET parameter storage
IF_NONE # Is it a valid vote ?
{ FAIL }
{ # Some x, x now in the stack
PUSH int 1; ADD; SOME; # Some (x + 1)
DIP { DUP; CAR; DIP { CDR } }; SWAP; UPDATE;
# UPDATE parameter (Some (x + 1)) storage
PUSH unit Unit; PAIR; # Pair Unit new_storage
}
};
}
</code></pre>
<p>Michelson has several specificities:</p>
<ul>
<li>Typing a Michelson program is done by <em>type propagation</em>, and not <em>à la Milner</em>. Polymorphic types are forbidden and type annotations are required when a type is ambiguous (<em>e.g.</em> an empty list).
</li>
<li>Functions (<em>lambdas</em>) are pure and are not closures, <em>i.e.</em> they must have an empty environment. For instance, a function passed to another contract as parameter acts in a purely functional way, only accessing the environment of the new contract.
</li>
<li>Method calls are performed with the instruction <code>TRANSFER_TOKENS</code>: it requires an empty stack (not counting its arguments). It takes as an argument the current storage, saves it before the call is made, and finally returns it after the call together with the result. This forces developers to save anything worth saving in the current storage, while keeping in mind that a <em>reentrant</em> call can happen (the returned storage might be different).
</li>
</ul>
<p>We won't explain the semantics of Michelson here; a good big-step semantics is available <a href="https://gitlab.com/tezos/tezos/blob/alphanet/src/proto/alpha/docs/language.md">here</a>.</p>
<h2>The Liquidity Language</h2>
<p>Liquidity is also a functional, statically and strongly typed language that compiles down to the stack-based language Michelson. Its syntax is a subset of OCaml and its semantics is given by its compilation schema (see below). By making the choice of staying close to Michelson in spirit while offering higher level constructs, Liquidity makes it easy to write legible smart contracts with the same safety guarantees offered by Michelson. In particular we decided that it was important to keep the purely functional aspect of the language so that simply reading a contract is not obscured by effects and global state. In addition, the OCaml syntax makes Liquidity an <em>immediately accessible</em> tool to programmers who already know OCaml, while its limited scope keeps the learning curve gentle.</p>
<p>The following example is a liquidity version of the vote contract. Its inner workings are rather obvious for anyone who has already programmed in a ML-like language.</p>
<pre><code class="language-ocaml">[%%version 0.15]
type votes = (string, int) map
let%init storage (myname : string) =
Map.add myname 0 (Map ["ocaml", 0; "pro", 0])
let%entry main
(parameter : string)
(storage : votes)
: unit * votes =
let amount = Current.amount() in
if amount < 5.00tz then
Current.failwith "Not enough money, at least 5tz to vote"
else
match Map.find parameter storage with
| None -> Current.failwith "Bad vote"
| Some x ->
let storage = Map.add parameter (x+1) storage in
( (), storage )
</code></pre>
<p>A Liquidity contract starts with an optional version meta-information. The compiler can reject the program if it is written in a too old version of the language or if it is itself not recent enough. Then comes a set of type and function definitions. It is also possible to specify an initial storage (constant, or a non-constant storage initializer) with <code>let%init storage</code>. Here we define a type abbreviation <code>votes</code> for a map from strings to integers. It is the structure that we will use to store our vote counts.</p>
<p>The storage initializer creates a map containing two bindings, <code>"ocaml"</code> to <code>0</code> and <code>"pro"</code> to <code>0</code> to which we add another vote option depending on the argument <code>myname</code> given at deploy time.</p>
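<p>For instance, assuming the contract were deployed with <code>myname = "ocamlpro"</code> (a hypothetical deployment argument chosen for this illustration), the initial storage would contain three bindings, one per vote option:</p>
<pre><code class="language-ocaml">(* Result of the initializer for myname = "ocamlpro" *)
Map ["ocaml", 0; "ocamlpro", 0; "pro", 0]
</code></pre>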
<p>The entry point of the program is a function <code>main</code> defined with a special annotation <code>let%entry</code>. It takes as arguments a call parameter (<code>parameter</code>) and a storage (<code>storage</code>) and returns a pair whose first element is the result of the call, and second element is a potentially modified storage.</p>
<p>The above program defines a local variable <code>amount</code> which contains the amount of the transaction which generated the call. It checks that it is at least 5 tezzies. If not, we fail with an explanatory message. Then the program retrieves the number of votes for the chosen option given as parameter. If the vote is not a valid one (<em>i.e.</em>, there is no binding in the map), execution fails. Otherwise, the current number of votes is bound to the name <code>x</code>. Storage is updated by incrementing the number of votes for the chosen option. The built-in function <code>Map.add</code> adds a new binding (here, it replaces a previously existing binding) and returns the modified map. The program terminates, in the normal case, on its last expression, which is its returned value (a pair containing <code>()</code>, since the contract only modifies the storage, and the storage itself).</p>
<p><a href="https://github.com/OCamlPro/liquidity/blob/next/docs/liquidity.md">A reference manual for Liquidity is available here</a>. It gives a relatively complete overview of the available types, built-in functions and constructs of the language.</p>
<h2>Compilation</h2>
<h3>Encodings</h3>
<p>Because Liquidity is a lot richer than Michelson, some types and constructs must be simplified or encoded. <em>Record</em> types are translated to right-associated pairs with as many components as the record has fields. <code>t1</code> is encoded as <code>t1'</code> in the following example.</p>
<pre><code class="language-ocaml">type t1 = { a: int; b: string; c: bool}
type t1’ = (int * (string * bool))
</code></pre>
<p>Field accesses in a record are translated to accesses in the corresponding tuples (pairs). <em>Sum</em> (or union) types are translated using the built-in <code>variant</code> type (this is the <code>or</code> type in Michelson). <code>t2</code> is encoded as <code>t2'</code> in the following example.</p>
<pre><code class="language-ocaml">type ('a, 'b) variant = Left of 'a | Right of `b
type t2 = A of int | B of string | C
type t2’ = (int, (string, unit) variant) variant
</code></pre>
<p>Similarly, pattern matching on expressions of a sum type is translated to nested pattern matching on variant-typed expressions. An example translation is the following:</p>
<pre><code class="language-ocaml">match x with
| A i -> something1(i)
| B s -> something2(s)
| C -> something3
match x with
| Left i -> something1(i)
| Right r -> match r with
| Left s -> something2(s)
| Right -> something3
</code></pre>
<p>Liquidity also supports closures while Michelson only allows pure lambdas. Closures are translated by <em>lambda-lifting</em>, <em>i.e.</em> encoded as pairs whose first element is a lambda and whose second element is the closure environment. The resulting lambda takes as argument a pair composed of the closure's argument and environment. Adequate transformations are also performed for built-in functions that take lambdas as arguments (<em>e.g.</em> in <code>List.map</code>) to allow closures instead.</p>
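<p>As a plain OCaml illustration of this encoding (a sketch written for this post, not the actual compiler output), a closure capturing one variable becomes a closed function paired with its environment:</p>
<pre><code class="language-ocaml">(* A closure capturing [y]... *)
let make_multiplier y =
  fun z -> z * y

(* ...is lambda-lifted into a pair (closed lambda, environment).
   The lambda receives its original argument and the environment
   together, as a pair. *)
let make_multiplier_lifted y =
  let lambda (z, env_y) = z * env_y in
  (lambda, y)

(* Applying the encoded closure. *)
let apply_lifted (lambda, env) z = lambda (z, env)
</code></pre>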
<h3>Compilation schema</h3>
<p>This little section is a bit more technical, so if you don't care how Liquidity is compiled precisely, you can skip over to the next one.</p>
<p>We write Γ, [|<em>x</em>|]<sub><em>d</em></sub> ⊢ <em>X</em> ↑<sup><em>t</em></sup> for the compilation of the Liquidity instruction <em>x</em> in environment Γ. Γ is a map associating variable names with positions in the stack. The compilation algorithm also maintains the size of the current stack (at compilation of instruction <em>x</em>), denoted by <em>d</em> in the previous expression. Below is a non-deterministic version of the compilation schema, the one implemented in the Liquidity compiler being a determinized version.</p>
<p><img src="/blog/assets/img/formula_compil_schema.png" alt="" /></p>
<p>The result of compiling <em>x</em> is a Michelson instruction (or sequence of instructions) <em>X</em> together with a Boolean transfer information <em>t</em>. The instruction <code>Contract.call</code> (or <code>TRANSFER_TOKENS</code> in Michelson) needs an empty stack to evaluate, so the compiler empties the stack before translating this call. However, the various branches of a Michelson program must have the same <em>stack type</em>. This is why we need to maintain this information so that the compiler can empty stacks in some parts of the program.</p>
<p>Some of the rules have parts annotated with ?<sub><em>b</em></sub>. This suffix denotes a potential reset or erasing. In particular:</p>
<ul>
<li>For sets, Γ?<sub><em>b</em></sub> is ∅ if <em>b</em> evaluates to false, and Γ otherwise.
</li>
<li>For integers, <em>d</em>?<sub><em>b</em></sub> is <code>0</code> if <em>b</em> evaluates to false, and <em>d</em> otherwise.
</li>
<li>For instructions, (<em>X</em>)?<sub><em>b</em></sub> is <code>{}</code> if <em>b</em> evaluates to false, and <em>X</em> otherwise.
</li>
</ul>
<p>For instance, by looking at rule CONST, we can see that compiling a Liquidity constant simply consists of pushing this constant on the stack. To handle variables in a simple manner, the rule VAR tells us to look in the environment Γ for the index associated with the variable we want to compile. Then, instruction D(U)<sup>i</sup>P puts at the top of the stack a copy of the element present at depth <em>i</em>. Variables are added to Γ with the Liquidity construct <code>let ... in</code> or with any construct that binds a new symbol, like <code>fun</code> for instance.</p>
<h2>Decompilation from Michelson</h2>
<p>While Michelson programs are <em>high level</em> compared to other <em>bytecodes</em>, it remains difficult for a blockchain end-user to understand what a Michelson program does exactly by looking at it. However, following the idea that "code is law", a user should be able to read a contract and understand its precise semantics. Thus, we have developed a <em>decompiler</em> from Michelson to Liquidity, which recovers a much more readable and understandable representation of a program on the blockchain.</p>
<p>The decompilation of Michelson code follows the diagram below where:</p>
<ul>
<li><strong>Cleaning</strong> consists of simplifying Michelson code to accelerate the whole process and simplify the following task. For now it consists of erasing instructions whose continuation is a failure.
</li>
<li><strong>Symbolic Execution</strong> consists of executing the Michelson program on symbolic inputs, replacing every value placed on the stack by a node containing the instruction that generated it. Each node of this graph can be seen as an expression of the target program, which can be bound to a variable name. Edges to this node represent future occurrences of this variable.
</li>
<li><strong>Decompilation</strong> consists of transforming the graph generated by the previous step into a Liquidity syntax tree. Names for variables are recovered from annotations produced by the Liquidity compiler (in case we decompile a Michelson program that was generated from Liquidity), or are chosen on the fly when no annotation is present (<em>e.g.</em> if the Michelson program was written by hand).
</li>
</ul>
<p>Finally the program is typed (to ensure no mistakes were made), simplified and pretty printed.</p>
<p><img src="/blog/assets/img/diagram_decomp.png" alt="" /></p>
<h3>Example of decompilation</h3>
<pre><code class="language-Makefile">return int;
storage int;
code {DUP; CAR;
DIP { CDR; PUSH int 1 }; # stack is: parameter :: 1 :: storage
IF # if parameter = true
{ DROP; DUP; } # stack is storage :: storage
{ } # stack is 1 :: storage
;
PAIR;
}
</code></pre>
<p>This example illustrates some of the difficulties of the decompilation process: Liquidity is a purely functional language where each construct is an expression returning a value; Michelson handles the stack directly, which has no direct counterpart in Liquidity (values on the stack don't all have the same type, as opposed to values in a list). In this example, depending on the value of <code>parameter</code> the contract returns either the content of the storage, or the integer <code>1</code>. In the Michelson code, the programmer used the instruction <code>IF</code>, but its branches do not return a value and only operate by modifying (or not) the stack.</p>
<pre><code class="language-ocaml">[%%version 0.15]
type storage = int
let%entry main (parameter : bool) (storage : storage) : (int * storage) =
((if parameter then storage else 1 ), storage)
</code></pre>
<p>The above translation to Liquidity also contains an <code>if</code>, but it has to return a value. The graph below is the result of the <em>symbolic execution</em> phase on the Michelson program. The <code>IF</code> instruction is decomposed into several nodes, but does not contain any remaining instruction: the result of this <code>if</code> is in fact the difference between the stack resulting from the execution of the <code>then</code> branch and the one resulting from the <code>else</code> branch. It is denoted by the node <code>N_IF_END_RESULT 0</code> (if there were multiple such nodes with different indexes, the result of the <code>if</code> would have been a tuple, corresponding to the multiple changes in the stack).</p>
<p><img src="/blog/assets/img/graph_test6.png" alt="" /></p>
<h2>Try-Liquidity</h2>
<p>You can go to <a href="http://liquidity-lang.org/edit">https://liquidity-lang.org/edit</a> to try out Liquidity in your browser.</p>
<p>The first thing to do (if you want to deploy and interact with a contract) is to go into the settings menu. There you can set your Tezos private key (use one that you generated for the alphanet for the moment) or the source (<em>i.e.</em> your public key hash, which is derived from your private key if you set it).</p>
<p>You can also change which Tezos node you want to interact with (the first one should do, but you can also set one of your choosing, such as one running locally on your machine). The timestamp shown next to the node name indicates how long ago the last block it knows of was produced. Transactions that you make on a node that is not synchronized will not be included in the main chain.</p>
<p><img src="/blog/assets/img/screenshot_liqedit_settings.png" alt="" />
You should now see your account with its balance in the top bar:</p>
<p><img src="/blog/assets/img/screenshot_liqedit_account.png" alt="" /></p>
<p>In the main editor window, you can select a few Liquidity example contracts or write your own. For this small tutorial, we will select <code>multisig.liq</code> which is a minimal multi-signature wallet contract. It allows anyone to send money to it, but it requires a certain number of predefined owners to agree before making a withdrawal.</p>
<p>Clicking on the button <kbd>Compile</kbd> should make the editor blink green (when there are no errors) and the compiled Michelson will appear on the editor on the right.</p>
<p><img src="/blog/assets/img/screenshot_liqedit_editor.png" alt="" />
Let's now deploy this contract on the Tezos alphanet. By going into the <strong>Deploy</strong> (or paper airplane icon) tab, we can choose our set of owners for the multisig contract and the minimum number of owners to be in agreement before a withdrawal can proceed. Here I put down two account addresses for which I possess the private keys, and I want the two owners to agree before any transaction is approved (<code>2p</code> is the natural number 2).</p>
<p><img src="/blog/assets/img/screenshot_liqedit_deploy.png" alt="" />
Then I can either forge the deployment operation, which can then be signed offline and injected into the Tezos chain by other means, or I can directly deploy this contract (if the private key is set in settings). If deployment is successful, we can see both the deployment operation and the new contract on a block explorer by clicking on the provided links.</p>
<p>Now we can query the blockchain to examine our newly deployed contract. Head over to the <strong>Examine</strong> tab. The address field should already be filled with our contract handle. We just have to click on <kbd>Retrieve balance and storage</kbd>.</p>
<p><img src="/blog/assets/img/screenshot_liqedit_examine1.png" alt="" />
The contract has 3tz on its balance because we chose to initialize it this way. On the right is the current storage of the contract (in Liquidity syntax) which is a record with four fields. Notice that the <code>actions</code> field is an empty map.</p>
<p>Let's make a few calls to this contract. Head over to the <strong>Call</strong> tab and fill in the parameter and the amount. We can send for instance 5.00tz with the parameter <code>Pay</code>. Clicking on the button <kbd>Call</kbd> generates a transaction which we can observe on a block explorer. More importantly, if we go back to the <strong>Examine</strong> tab, we can now retrieve the new information and see that the storage is unchanged but the balance is 8.00tz.</p>
<p>We can also make a call to withdraw money from the contract. This is done by passing a parameter of the form:</p>
<pre><code class="language-ocaml">Manage (
Some {
destination = tz1brR6c9PY3SSfBDu7Qxdhsz3pvNRDwf68a;
amount = 2tz;
})
</code></pre>
<p>This is a proposition of transfer of funds in the amount of 2.00tz from the contract to the destination <code>tz1brR6c9PY3SSfBDu7Qxdhsz3pvNRDwf68a</code>.</p>
<p><img src="/blog/assets/img/screenshot_liqedit_call1.png" alt="" /></p>
<p>The balance of the contract has not changed (it is still 8.00tz) but the storage has been modified. That is because this multisig contract requires two owners to agree before proceeding. The proposition is stored in the map <code>actions</code> and is associated to the owner who made said proposition.</p>
<pre><code class="language-ocaml">{
owners =
(Set
[tz1XT2pgiSRWQqjHv5cefW7oacdaXmCVTKrU;
tz1brR6c9PY3SSfBDu7Qxdhsz3pvNRDwf68a]);
actions =
(Map
[(tz1brR6c9PY3SSfBDu7Qxdhsz3pvNRDwf68a,
{
destination = tz1brR6c9PY3SSfBDu7Qxdhsz3pvNRDwf68a;
amount = 2.00tz
})]);
owners_length = 2p;
min_agree = 2p
}
</code></pre>
<p>We can now open a new browser tab and point it to <a href="http://liquidity-lang.org/edit">https://liquidity-lang.org/edit</a>, but this time we fill in the private key for the second owner <code>tz1XT2pgiSRWQqjHv5cefW7oacdaXmCVTKrU</code>. We choose the multisig contract in the Liquidity editor and fill in the contract address in the call tab with the same one as in the other session, <code>TZ1XvTpoSUeP9zZeCNWvnkc4FzuUighQj918</code> (you can double check that the icons for the two contracts are identical). For the withdrawal to proceed, this owner has to make the exact same proposition, so let's make a call with the same parameter:</p>
<pre><code class="language-ocaml">Manage (
Some {
destination = tz1brR6c9PY3SSfBDu7Qxdhsz3pvNRDwf68a;
amount = 2tz;
})
</code></pre>
<p>The call should also succeed. When we examine the contract, we can now see that its balance is down to 6.00tz and that the field <code>actions</code> of its storage has been reinitialized to the empty map. In addition, we can update the balance of our first account (by clicking on the circle arrow icon in the top bar) to see that it is now up an extra 2.00tz, as it was the destination of the proposed (and agreed on) withdrawal. All is well!</p>
<p>We have seen how to compile, deploy, call and examine Liquidity contracts on the Tezos alphanet using our online editor. Experiment with your own contracts and let us know how that works for you!</p>
<ul>
<li>Slides in <a href="https://files.ocamlpro.com/pub/liquidity_slides.en_.pdf">English</a>
</li>
<li>and <a href="https://files.ocamlpro.com/pub/liquidity_slides.pdf">French</a>
</li>
</ul>
<h1>Comments</h1>
<p>fredcy (9 February 2018 at 3 h 14 min):</p>
<blockquote>
<p>It says “Here we define a type abbreviation votes […]” but I don’t see any <code>votes</code> symbol in the nearby code.</p>
<p>[Still working through the document. I’m eager to try Liquidity rather than write in Michelson.]</p>
</blockquote>
<p>alain (9 February 2018 at 7 h 18 min):</p>
<blockquote>
<p>You are right, thanks for catching this. I’ve updated the contract code to use type <code>votes</code>.</p>
</blockquote>
<p>branch (26 February 2019 at 18 h 28 min):</p>
<blockquote>
<p>Why the “Deploy” button can be inactive, while liquidity contract is compiled successfully?</p>
</blockquote>
<p>alain (6 March 2019 at 15 h 09 min):</p>
<blockquote>
<p>For the Deploy button to become active, you need to specify an initial value for the storage directly in the code of the smart contract. This can be done by writing a constant directly or a function.</p>
</blockquote>
<pre><code class="language-sourcecode">let%init storage = (* the value of your initial storage*)
let%init storage x y z = (* the value of your initial storage, function of x, y and z *)
</code></pre>
opam 2.0.0 Release Candidate 1 is out!https://ocamlpro.com/blog/2018_02_02_opam_2.0.0_release_candidate_1_is_out2018-02-02T08:12:13Z2018-02-02T08:12:13Z
Louis Gesbert
We are pleased to announce a first release candidate for the long-awaited opam 2.0.0. A lot of polishing has been done since the last beta, including tweaks to the built-in solver, allowing in-source package definitions to be gathered in an opam/ directory, and much more. With all of the 2.0.0 featu...<p>We are pleased to announce a first release candidate for the long-awaited opam 2.0.0.</p>
<p>A lot of polishing has been done since the <a href="https://opam.ocaml.org/blog/opam-2-0-beta5/">last beta</a>, including tweaks to the built-in solver, allowing in-source package definitions to be gathered in an <code>opam/</code> directory, and much more.</p>
<p>With all of the 2.0.0 features getting pretty solid, we are now focusing on bringing all the guides up-to-date<a href="#foot-1">¹</a>, updating the tools and infrastructure, making sure there are no usability issues with the new workflows, and being future-proof so that further updates break as little as possible.</p>
<p>You are invited to read the <a href="https://opam.ocaml.org/blog/opam-2-0-beta5/">beta5 announcement</a> for details on the 2.0.0 features. Installation instructions haven't changed:</p>
<ol>
<li>From binaries: run
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>or download manually from <a href="https://github.com/ocaml/opam/releases/tag/2.0.0-rc">the Github "Releases" page</a> to your PATH.</p>
<ol start="2">
<li>From source, using opam:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>(then copy the opam binary to your PATH as explained)</p>
<ol start="3">
<li>From source, manually: see the instructions in the <a href="https://github.com/ocaml/opam/tree/2.0.0-rc#opam---a-package-manager-for-ocaml">README</a>.
</li>
</ol>
<p>Thanks a lot for testing out the RC and <a href="https://github.com/ocaml/opam/issues">reporting</a> any issues you may find. See <a href="https://opam.ocaml.org/blog/opam-2-0-beta5/#What-we-need-tested">what we need tested</a> for more detail.</p>
<hr />
<p><a id="foot-1">¹</a> You can at the moment rely on the <a href="http://opam.ocaml.org/doc/2.0/man/opam.html">manpages</a>, the <a href="http://opam.ocaml.org/doc/2.0/Manual.html">Manual</a>, and of course the <a href="http://opam.ocaml.org/doc/2.0/api/">API</a>, but other pages might be outdated.</p>
2017 at OCamlProhttps://ocamlpro.com/blog/2018_01_15_2017_at_ocamlpro2018-01-15T08:12:13Z2018-01-15T08:12:13Z
Muriel
Since 2017 is just over, now is probably the best time to review what happened during this hectic year at OCamlPro… Here are our big 2017 achievements, in the world of blockchains (the Liquidity smart contract language, Tezos and the Tezos ICO etc.), of OCaml (with OPAM 2, flambda 2 etc.), and of ...<p>Since 2017 is just over, now is probably the best time to review what happened during this hectic year at OCamlPro… Here are our big 2017 achievements, in the world of <a href="#blockchain"><strong>blockchains</strong></a> <em>(the <a href="#liquidity">Liquidity</a> smart contract language, <a href="#tezos">Tezos</a> and the Tezos ICO etc.)</em>, of <strong>OCaml</strong> (with <em><a href="#opam">OPAM</a> 2</em>, <a href="#flambda"><em>flambda</em></a> 2 etc.), and of <a href="#formalmethods"><strong>formal methods</strong></a> (<a href="#altergo"><em>Alt-Ergo</em></a> etc.)</p>
<h2>In the World of Blockchains</h2>
<h3>The Liquidity Language for smart contracts</h3>
<p><em>Work of Alain Mebsout, Fabrice Le Fessant, Çagdas Bozman, Michaël Laporte</em></p>
<p><img src="/blog/assets/img/logo_liquidity_small.png" alt="Liquidity" /></p>
<p>OCamlPro develops <a href="https://www.liquidity-lang.org/">Liquidity</a>, a high level smart contract language for Tezos. <a id="liquidity"></a>Liquidity is a human-readable language, purely functional, statically-typed, whose syntax is very close to the OCaml syntax. Programs can be compiled to the stack-based language (Michelson) of the Tezos blockchain.</p>
<p>To garner interest and adoption, we developed an online editor called "<a href="https://www.liquidity-lang.org/edit">Try Liquidity</a>". Smart-contract developers can design contracts interactively, directly in the browser, compile them to Michelson, run them and deploy them on the alphanet network of Tezos.</p>
<p>Future plans include a full-fledged web-based IDE for Liquidity. Worth mentioning is a neat feature of Liquidity: decompiling a Michelson program back to its Liquidity version, whether it was generated from Liquidity code or not. In practice, this makes it easy to read somewhat obfuscated contracts already deployed on the blockchain.</p>
<h3>Tezos and the Tezos ICO</h3>
<p><em>Work of Grégoire Henry, Benjamin Canou, Çagdas Bozman, Alain Mebsout, Michael Laporte, Mohamed Iguernlala, Guillem Rieu, Vincent Bernardoff (for DLS) and at times all the OCamlPro team in animated and joyful brainstorms.</em></p>
<p><img src="/blog/assets/img/logo_tezos_tz.png" alt="tezos" /></p>
<p>Since 2014, the OCamlPro team had been co-designing the Tezos prototype with Arthur Breitman based on Arthur's <a href="https://www.tezos.com/static/papers/white_paper.pdf">White Paper</a>, and had undertaken the implementation of the Tezos node and client. A technical prowess and design achievement we have been proud of. In 2017, we developed the infrastructure for the Tezos ICO (Initial Coin Offering) from the ground up, encompassing the web app (back-end and front-end), the Ethereum and Bitcoin (p2sh) multi-signature contracts, as well as the hardware Ledger based process for transferring funds. The ICO, conducted in collaboration with Arthur, was a resounding success — the equivalent of 230 million dollars (in ETH and BTC) at the time were raised for the Tezos Foundation!</p>
<p><a id="opam"></a><em>This work was allowed thanks to Arthur Breitman and DLS's funding.</em></p>
<h2>In the World of OCaml</h2>
<h3>Towards OPAM 2.0, the OCaml Package manager</h3>
<p><img src="/blog/assets/img/logo_opam_300_261.png" alt="opam" /></p>
<p><a href="https://opam.ocaml.org/blog/opam-2-0-beta5/">OPAM</a> was born at Inria/OCamlPro with Frederic, Thomas and Louis, and is still maintained here at OCamlPro. Now thanks to Louis Gesbert's thorough efforts and the OCaml Labs contribution, OPAM 2.0 is coming !</p>
<ul>
<li>opam is now compiled with a built-in solver, improving portability, ease of access and user experience (<code>aspcud</code> is no longer a requirement)
</li>
<li>new workflows for developers have been designed, including convenient ways to test and install local sources, more reliable ways to share development setups
</li>
<li>the general system has seen a large number of robustness and expressivity improvements, like <a href="https://opam.ocaml.org/blog/opam-extended-dependencies/">extended dependencies</a>
</li>
<li>it also provides better caching, and many hooks enabling, among others, setups with sandboxed builds, binary artifacts caching, or end-to-end package signature verification.
</li>
</ul>
<p><a id="flambda"></a>More details: on <a href="https://opam.ocaml.org/blog">https://opam.ocaml.org/blog</a> and releases on <a href="https://github.com/ocaml/opam/releases">https://github.com/ocaml/opam/releases</a></p>
<p><em>This work is allowed thanks to JaneStreet's funding.</em></p>
<h3>Flambda Compilation</h3>
<p><em>Work of Pierre Chambart, Vincent Laviron</em></p>
<p><img src="/blog/assets/img/logo_ocaml.png" alt="flambda" /></p>
<p>Pierre and Vincent's considerable work on Flambda 2 (the optimizing intermediate representation of the OCaml compiler – on which inlining occurs), in close cooperation with JaneStreet's team (Mark, Leo and Xavier) aims at overcoming some of flambda's limitations. This huge refactoring will help make OCaml code more maintainable, improving its theoretical grounds. Internal types are clearer, more concise, and possible control flow transformations are more flexible. Overall a precious outcome for industrial users.</p>
<p><a id="hpux"></a><em>This work is allowed thanks to JaneStreet's funding.</em></p>
<h3>OCaml for ia64-HPUX</h3>
<p>In 2017, OCamlPro also worked on porting OCaml to HPUX-ia64. This came from a request by CryptoSense, a French startup working on an OCaml tool to secure cryptographic protocols. OCaml had a port on Linux-ia64 that was deprecated before 4.00.0, and, even earlier, a port on HPUX, but not on ia64. So, we expected the easiest part would be to get the bytecode version running, and the hardest part to get access to an HPUX-ia64 computer: it was quite the opposite. HPUX is an awkward system where most tools (shell, make, etc.) have uncommon behaviors, which made even compiling a bytecode version difficult. On the contrary, it was actually easy to get access to a low-power virtual machine running HPUX-ia64 on a monthly basis. Also, we found a few bugs in the former OCaml ia64 backend, mostly caused by the scheduler, since ia64 uses explicit instruction parallelism. Debugging such code was quite a challenge, as instructions were often re-ordered and interleaved. Finally, after a few weeks of work, we got both the bytecode and native code versions running, with only a few limitations.</p>
<p><em>This work was mandated by CryptoSense.</em></p>
<h3>The style-checker Typerex-lint</h3>
<p><em>Work of Çagdas Bozman, Michael Laporte and Clément Dluzniewski.</em></p>
<p>In 2017, typerex-lint was improved and extended. Typerex-lint is a style-checker to analyze the sources of OCaml programs, and can be extended using plugins. It allows you to automatically check the conformance of a code base to some coding rules. We added some analyses to look for code that doesn't comply with the recommendations made by the SecurOCaml project members. We also made an interactive web output that provides an easy way to navigate typerex-lint results.</p>
<h3>Build systems and tools</h3>
<p><em>Work of Fabrice Le Fessant</em></p>
<p>Every year in the OCaml world, a new build tool appears. 2017 was no different, with the rise of jbuild/dune. jbuild came with some very nice features, some of which were already in our home-made build tool, ocp-build, like the ability to build multiple packages at once in a composable way; some others were new, like the ability to build multiple versions of a package in one run or the wrapping of libraries using module aliases. We have started to incorporate some of these features in ocp-build. Nevertheless, from our point of view, the two tools belong to two different families: jbuild/dune belongs to the "implicit" family, like ocamlbuild and oasis, with minimal project description; ocp-build belongs to the "explicit" family, like make and omake. We prefer the explicit family, because the build file acts as a description of the project, an entry point to understand the project and the modules. Also, we have kept working on improving the project description language for ocp-build, something that we think is of utmost importance. Latest release: ocp-build 1.99.20-beta.</p>
<h3><a id="formalmethods"></a><a id="altergo"></a>Other contributions and software</h3>
<ul>
<li>OCaml bugfixes by Pierre Chambart, Vincent Laviron, and other members of the team.
</li>
<li>The ocp-analyzer prototype by Vincent Laviron
</li>
</ul>
<h2>In the World of Formal Methods</h2>
<h3>Alt-Ergo</h3>
<p><em>By Mohamed Iguernlala</em></p>
<p><img src="/blog/assets/img/logo_alt_ergo.png" alt="alt-ergo" /></p>
<p>For <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a>, 2017 was the year of floating-point arithmetic reasoning. Indeed, in addition to the publication of our <a href="https://hal.inria.fr/hal-01522770/document">results</a> at the 29th International Conference on Computer Aided Verification (CAV), Jul 2017, we polished the prototype we started in 2016 and integrated it in the main branch. This is a joint work with Sylvain Conchon (Paris-Saclay University) and Guillaume Melquiond (Inria Lab) in the context of the <a href="https://soprano-project.fr/index.html">SOPRANO ANR Project</a>. Another big piece of work in 2017 consisted in investigating a better integration of an efficient CDCL-based SAT solver in Alt-Ergo. In fact, although modern CDCL SAT solvers are very fast, their interaction with the decision procedures and quantifiers instantiation should be finely tuned to get good results in the context of Satisfiability Modulo Theories. This new solver should be integrated into Alt-Ergo in the next few weeks. This work has been done in the context of the <a href="https://www.clearsy.com/projet-lchip-architecture-double-processeur-premier-starter-kit/">LCHIP FUI Project</a>.</p>
<p>We also released a new major version of Alt-Ergo (2.0.0), with a change in the licensing scheme. Alt-Ergo's development repository at OCamlPro is now public, which will allow users to get updates and bugfixes as soon as possible.</p>
<h3>Towards a formalized type system for OCaml</h3>
<ul>
<li><em>Work of Pierrick Couderc, Grégoire Henry, Fabrice Le Fessant and Michel Mauny (Inria Paris)</em>
</li>
</ul>
<p>OCaml is known for its rich type system and strong type inference; unfortunately, such a complex type engine is prone to errors, and it can be hard to get a clear idea of how typing works for some features of the language. For 3 years now, OCamlPro has been working on formalizing a subset of this type system and implementing a <a href="https://github.com/OCamlPro/ocp-typechecker">type checker</a> derived from this formalization. The idea behind this work is to help the compiler developers ensure some form of correctness of the inference. This type checker takes a Typedtree, the intermediate representation resulting from inference, and checks its consistency. Put differently, this tool checks that each annotated node of the Typedtree can indeed be given its type according to the context, its form and its sub-expressions. In practice, we could check and catch some known bugs resulting from unsound programs that were accepted by the compiler.</p>
<p>This type checker is only available for OCaml 4.02 for the moment, and the document describing this formalized type system will shortly be available in Pierrick Couderc's PhD thesis.</p>
<h2>Around the World</h2>
<p>OCamlPro's team members attended many events throughout the world:</p>
<ul>
<li>The <a href="https://conf.researchr.org/home/icfp-2017">ICFP'2017</a> (Oxford)
</li>
<li>The <a href="https://jfla.inria.fr/2017/">JFLA'2017</a> (Gourette, Pyrénées)
</li>
<li>The <a href="https://cavconference.org/2017/">CAV'2017</a> (29th International Conference on Computer Aided Verification, Heidelberg)
</li>
<li>The <a href="https://www.opensourcesummit.paris/Bienvenue_150.html">POSS'2017</a> (Paris)
</li>
</ul>
<p>As a company committed to fostering the OCaml ecosystem, we also organized OCaml meetups (see the famous <a href="https://www.meetup.com/fr-FR/ocaml-paris/">OUPS</a> meetups in Paris!).</p>
<h2>A few hints about what's ahead for OCamlPro</h2>
<p>Let's keep up the good work!</p>
opam 2.0 Beta5 is out!https://ocamlpro.com/blog/2017_11_27_opam_2.0_beta5_is_out2017-11-27T08:12:13Z2017-11-27T08:12:13Z
Louis Gesbert
After a few more months brewing, we are pleased to announce a new beta release of opam. With this new milestone, opam is reaching feature-freeze, with an expected 2.0.0 by the beginning of next year. This version brings many new features, stability fixes, and big improvements to the local developmen...<p>After a few more months brewing, we are pleased to announce a new beta release
of opam. With this new milestone, opam is reaching feature-freeze, with an
expected 2.0.0 by the beginning of next year.</p>
<p>This version brings many new features, stability fixes, and big improvements to
the local development workflows.</p>
<h2>What's new</h2>
<p>The features presented in past announcements:
<a href="http://opam.ocaml.org/blog/opam-local-switches/">local switches</a>,
<a href="http://opam.ocaml.org/blog/opam-install-dir/">in-source package definition handling</a>,
<a href="http://opam.ocaml.org/blog/opam-extended-dependencies/">extended dependencies</a>
are of course all present. But now, all the glue to make them interact nicely
together is here to provide new smooth workflows. For example, the following
command, if run from the source tree of a given project, creates a local switch
where it will restore a precise installation, including explicit versions of all
packages and pinnings:</p>
<pre><code class="language-shell-session">opam switch create ./ --locked
</code></pre>
<p>this leverages the presence of <code>opam.locked</code> or <code><name>.opam.locked</code> files,
which are valid package definitions that contain additional details of the build
environment, and can be generated with the
<a href="https://github.com/AltGr/opam-lock"><code>opam-lock</code> plugin</a> (the <code>lock</code> command may
be merged into opam once finalised).</p>
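<p>For illustration only, here is a rough sketch of what such a lock file could contain; the package names and versions are made up, and the exact fields emitted by the plugin may differ:</p>
<pre><code class="language-shell-session"># illustrative lock file: package names and versions below are made up
opam-version: "2.0"
depends: [
  "ocaml" {= "4.04.2"}
  "ocamlfind" {= "1.7.3"}
  "lwt" {= "3.1.0"}
]
</code></pre>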
<p>But this new beta also provides a large amount of quality of life improvements,
and other features. A big one, for example, is the integration of a built-in
solver (derived from <a href="http://www.i3s.unice.fr/~cpjm/misc/mccs.html"><code>mccs</code></a> and
<a href="https://www.gnu.org/software/glpk/"><code>glpk</code></a>). This means that the <code>opam</code> binary
works out-of-the box, without requiring the external
<a href="https://www.cs.uni-potsdam.de/wv/aspcud/"><code>aspcud</code></a> solver, and on all
platforms. It is also faster.</p>
<p>Another big change is that detection of architecture and OS details is now done
in opam, and can be used to select the external dependencies with the new format
of the <a href="http://opam.ocaml.org/doc/2.0/Manual.html#opamfield-depexts"><code>depexts:</code></a>
field, but also to affect dependencies or build flags.</p>
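<p>As a rough, illustrative example (the variable and package names below are not authoritative; check the manual linked above for the exact syntax), an opam file could now select system packages per platform directly:</p>
<pre><code class="language-shell-session"># hypothetical depexts: field selecting system packages per platform
depexts: [
  ["libgmp-dev"] {os-distribution = "debian"}
  ["gmp"] {os = "macos"}
]
</code></pre>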
<p>There is much more to it. Please see the
<a href="https://github.com/ocaml/opam/blob/2.0.0-beta5/CHANGES">changelog</a>, and the
<a href="http://opam.ocaml.org/doc/2.0/Manual.html">updated manual</a>.</p>
<h2>How to try it out</h2>
<p>Our warm thanks for trying the new beta and
<a href="https://github.com/ocaml/opam/issues">reporting</a> any issues you may hit.</p>
<p>There are three main ways to get the update:</p>
<ol>
<li>The easiest is to use our pre-compiled binaries.
<a href="https://github.com/ocaml/opam/blob/master/shell/opam_installer.sh">This script</a>
will also make backups if you migrate from 1.x, and has an option to revert
back:
</li>
</ol>
<pre><code class="language-shell-session">sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)
</code></pre>
<p>This uses the binaries from https://github.com/ocaml/opam/releases/tag/2.0.0-beta5</p>
<ol start="2">
<li>Another option is to compile from source, using an existing opam
installation. Simply run:
</li>
</ol>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>and follow the instructions (you will need to copy the compiled binary to
your PATH).</p>
<ol start="3">
<li>
<p>Compiling by hand from the
<a href="https://github.com/ocaml/opam/releases/download/2.0.0-beta5/opam-full-2.0.0-beta5.tar.gz">inclusive source archive</a>,
or from the <a href="https://github.com/ocaml/opam/tree/2.0.0-beta5">git repo</a>. Use
<code>./configure && make lib-ext && make</code> if you have OCaml >= 4.02.3 already
available; <code>make cold</code> otherwise.</p>
<p>If the build fails after updating a git repo from a previous version, try
<code>git clean -fdx src/</code> to remove any stale artefacts.</p>
</li>
</ol>
<p>Note that the repository format is different from that of opam 1.2. Opam 2 will
be automatically redirected from the
<a href="https://github.com/ocaml/opam-repository">opam-repository</a> to an automatically
rewritten 2.0 mirror, and is otherwise able to do the conversion on the fly
(both for package definitions when pinning, and for whole repositories). You may
not yet contribute packages in 2.0 format to opam-repository, though.</p>
<h2>What we need tested</h2>
<p>We are interested in all opinions and reports, but here are a few areas where
your feedback would be specially useful to us:</p>
<ul>
<li>Use 2.0 day-to-day, in particular check any packages you may be maintaining.
We would like to ensure there are no regressions due to the rewrite from 1.2
to 2.0.
</li>
<li>Check the quality of the solutions provided by the solver (or conflicts, when
applicable).
</li>
<li>Test the different pinning mechanisms (rsync, git, hg, darcs) with your
project version control systems. See the <code>--working-dir</code> option.
</li>
<li>Experiment with local switches for your project (and/or <code>opam install DIR</code>).
Give us feedback on the workflow. Use <code>opam lock</code> and share development
environments.
</li>
<li>If you have any custom repositories, please try the conversion to 2.0 format
with <code>opam admin upgrade --mirror</code> on them, and use the generated mirror.
</li>
<li>Start porting your CI systems for larger projects to use opam 2, and give us
feedback on any improvements you need for automated scripting (e.g. the
<code>--json</code> output).
</li>
</ul>
new opam features: more expressive dependencieshttps://ocamlpro.com/blog/2017_05_11_new_opam_features_more_expressive_dependencies2017-05-11T08:12:13Z2017-05-11T08:12:13Z
Louis Gesbert
This blog will cover yet another aspect of the improvements opam 2.0 has over opam 1.2. I may be a little more technical than previous issues, as it covers a feature directed specifically at packagers and repository maintainers, and regarding the package definition format. Specifying dependencies in...<p>This post will cover yet another aspect of the improvements opam 2.0 has over opam 1.2. It may be a little more technical than the previous ones, as it covers a feature directed specifically at packagers and repository maintainers, and concerns the package definition format.</p>
<h3>Specifying dependencies in opam 1.2</h3>
<p>Opam 1.2 already has an advanced way of specifying package dependencies, using formulas on packages and versions, with the following syntax:</p>
<pre><code class="language-shell-session"> depends: [
"foo" {>= "3.0" & < "4.0~"}
( "bar" | "baz" {>= "1.0"} )
]
</code></pre>
<p>meaning that the package being defined depends on both package <code>foo</code>, within the <code>3.x</code> series, and one of <code>bar</code> or <code>baz</code>, the latter with version at least <code>1.0</code>. See <a href="https://opam.ocaml.org/doc/Manual.html#PackageFormulas">here</a> for a complete documentation.</p>
<p>This only allows, however, dependencies that are static for a given package.</p>
<p>Opam 1.2 introduced <code>build</code>, <code>test</code> and <code>doc</code> "dependency flags" that could provide some specifics for dependencies (<em>e.g.</em> <code>test</code> dependencies would only be needed when tests were requested for the package). These were constrained to appear before the version constraints, <em>e.g.</em> <code>"foo" {build & doc & >= "3.0"}</code>.</p>
<h3>Extensions in opam 2.0</h3>
<p>Opam 2.0 generalises the dependency flags, and makes the dependency specifications more expressive by allowing <em>filters</em>, <em>i.e.</em> formulas based on opam variables, to be mixed with the version constraints. If the formula holds, the dependency is enforced; if not, it is discarded.</p>
<p>This is documented in more detail <a href="https://opam.ocaml.org/doc/2.0/Manual.html#Filteredpackageformulas">in the opam 2.0 manual</a>.</p>
<p>Note also that, since the compilers are now packages, the required OCaml version is now expressed using this mechanism as well, through a dependency to the (virtual) package <code>ocaml</code>, <em>e.g.</em> <code>depends: [ "ocaml" {>= "4.03.0"} ]</code>. This replaces uses of the <code>available:</code> field and <code>ocaml-version</code> switch variable.</p>
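<p>Concretely, where an opam 1.2 file would restrict the OCaml versions with a filter in the <code>available:</code> field, an opam 2.0 file expresses the same thing as a plain dependency (the version bound below is just an example):</p>
<pre><code class="language-shell-session"># opam 1.2 (illustrative)
available: [ ocaml-version >= "4.03.0" ]

# opam 2.0 (illustrative)
depends: [ "ocaml" {>= "4.03.0"} ]
</code></pre>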
<h4>Conditional dependencies</h4>
<p>This makes it trivial to add, for example, a condition on the OS to a given dependency, using the built-in variable <code>os</code>:</p>
<pre><code class="language-shell-session">depends: [ "foo" {>= "3.0" & < "4.0~" & os = "linux"} ]
</code></pre>
<p>here, <code>foo</code> is simply not needed if the OS isn't Linux. We could also be more specific about other OSes using more complex formulas:</p>
<pre><code class="language-shell-session"> depends: [
"foo" { "1.0+linux" & os = "linux" |
"1.0+osx" & os = "darwin" }
"bar" { os != "osx" & os != "darwin" }
]
</code></pre>
<p>Meaning that Linux and OSX require <code>foo</code>, respectively versions <code>1.0+linux</code> and <code>1.0+osx</code>, while other systems require <code>bar</code>, any version.</p>
<h4>Dependency flags</h4>
<p>Dependency flags, as used in 1.2, are no longer needed, and are replaced by variables that can appear anywhere in the version specification. The following variables are typically useful there:</p>
<ul>
<li><code>with-test</code>, <code>with-doc</code>: replace the <code>test</code> and <code>doc</code> dependency flags, and are <code>true</code> when the package's tests or documentation have been requested
</li>
<li>likewise, <code>build</code> behaves as it did before, limiting the dependency to a "build-dependency", implying that the package won't need to be rebuilt if the dependency changes
</li>
<li><code>dev</code>: this boolean variable holds <code>true</code> on "development" packages, that is, packages that are bound to a non-stable source (a version control system, or if the package is pinned to an archive without known checksum). <code>dev</code> sources often happen to need an additional preliminary step (e.g. <code>autoconf</code>), which may have its own dependencies.
</li>
</ul>
<p>Use <code>opam config list</code> for a list of pre-defined variables. Note that the <code>with-test</code>, <code>with-doc</code> and <code>build</code> variables are not available everywhere: the first two are allowed only in the <code>depends:</code>, <code>depopts:</code>, <code>build:</code> and <code>install:</code> fields, and the latter is specific to the <code>depends:</code> and <code>depopts:</code> fields.</p>
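<p>As a small, purely illustrative snippet (the package names and version bounds here are placeholders), a <code>depends:</code> field mixing these variables could look like this:</p>
<pre><code class="language-shell-session"># placeholder package names and versions
depends: [
  "ocamlfind" {build}
  "odoc" {with-doc}
  "ounit" {with-test & >= "2.0.0"}
]
</code></pre>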
<p>For example, the <code>datakit.0.9.0</code> package has:</p>
<pre><code class="language-shell-session">depends: [
...
"datakit-server" {>= "0.9.0"}
"datakit-client" {with-test & >= "0.9.0"}
"datakit-github" {with-test & >= "0.9.0"}
"alcotest" {with-test & >= "0.7.0"}
]
</code></pre>
<p>When running <code>opam install datakit.0.9.0</code>, the <code>with-test</code> variable is set to <code>false</code>, and the <code>datakit-client</code>, <code>datakit-github</code> and <code>alcotest</code> dependencies are filtered out: they won't be required. With <code>opam install datakit.0.9.0 --with-test</code>, the <code>with-test</code> variable is true (for that package only, tests on packages not listed on the command-line are not enabled!). In this case, the dependencies resolve to:</p>
<pre><code class="language-shell-session">depends: [
...
"datakit-server" {>= "0.9.0"}
"datakit-client" {>= "0.9.0"}
"datakit-github" {>= "0.9.0"}
"alcotest" {>= "0.7.0"}
]
</code></pre>
<p>which is treated normally.</p>
<h4>Computed versions</h4>
<p>It is also possible to use variables, not only as conditions, but to compute the version values: <code>"foo" {= var}</code> is allowed and will require the version of package <code>foo</code> corresponding to the value of variable <code>var</code>.</p>
<p>This is useful, for example, to define a family of packages, which are released together with the same version number: instead of having to update the dependencies of each package to match the common version at each release, you can leverage the <code>version</code> package-variable to mean "that other package, at the same version as current package". For example, <code>foo-client</code> could have the following:</p>
<pre><code class="language-shell-session">depends: [ "foo-core" {= version} ]
</code></pre>
<p>It is even possible to use variable interpolations within versions, <em>e.g.</em> specifying an os-specific version differently than above:</p>
<pre><code class="language-shell-session">depends: [ "foo" {= "1.0+%{os}%"} ]
</code></pre>
<p>this will expand the <code>os</code> variable, resolving to <code>1.0+linux</code>, <code>1.0+darwin</code>, etc.</p>
<p>Getting back to our <code>datakit</code> example, we could leverage this and rewrite it to the more generic:</p>
<pre><code class="language-shell-session">depends: [
...
"datakit-server" {>= version}
"datakit-client" {with-test & >= version}
"datakit-github" {with-test & >= version}
"alcotest" {with-test & >= "0.7.0"}
]
</code></pre>
<p>Since the <code>datakit-*</code> packages follow the same versioning, this avoids having to rewrite the opam file on every new version, with a risk of error each time.</p>
<p>As a side note, these variables are consistent with what is now used in the <a href="http://opam.ocaml.org/doc/2.0/Manual.html#opamfield-build"><code>build:</code></a> field, and the <a href="http://opam.ocaml.org/doc/2.0/Manual.html#opamfield-build-test"><code>build-test:</code></a> field is now deprecated. So this other part of the same <code>datakit</code> opam file:</p>
<pre><code class="language-shell-session">build:
["ocaml" "pkg/pkg.ml" "build" "--pinned" "%{pinned}%" "--tests" "false"]
build-test: [
["ocaml" "pkg/pkg.ml" "build" "--pinned" "%{pinned}%" "--tests" "true"]
["ocaml" "pkg/pkg.ml" "test"]
]
</code></pre>
<p>would now be preferably written as:</p>
<pre><code class="language-shell-session">build: ["ocaml" "pkg/pkg.ml" "build" "--pinned" "%{pinned}%" "--tests" "%{with-test}%"]
run-test: ["ocaml" "pkg/pkg.ml" "test"]
</code></pre>
<p>which avoids building twice just to change the options.</p>
<h4>Conclusion</h4>
<p>Hopefully this extension to expressivity in dependencies will make the life of packagers easier; feedback is welcome on your personal use-cases.</p>
<p>Note that the official repository is still in 1.2 format (served as 2.0 at <code>https://opam.ocaml.org/2.0</code>, through automatic conversion), and will only be migrated a little while after opam 2.0 is finally released. You are welcome to experiment on custom repositories or pinned packages already, but will need a little more patience before you can contribute package definitions making use of the above to the <a href="https://github.com/ocaml/opam-repository">official repository</a>.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
new opam features: "opam install DIR"https://ocamlpro.com/blog/2017_05_04_new_opam_features_opam_install_dir2017-05-04T08:12:13Z2017-05-04T08:12:13Z
Louis Gesbert
After the opam build feature was announced followed a lot of discussions, mainly having to do with its interface, and misleading name. The base features it offered, though, were still widely asked for: a way to work directly with the project in the current directory, assuming it contains definitions...<p>The announcement of the <a href="/blog/2017_03_16_new_opam_features_opam_build">opam build</a> feature was followed by a lot of discussion, mainly having to do with its interface and its misleading name. The base features it offered, though, were still widely asked for:</p>
<ul>
<li>a way to work directly with the project in the current directory, assuming it contains definitions for one or more packages
</li>
<li>a way to copy the installed files of a package below a specified <code>destdir</code>
</li>
<li>an easier way to get started hacking on a project, even without an initialised opam
</li>
</ul>
<h3>Status of <code>opam build</code></h3>
<p><code>opam build</code>, as described in a <a href="/blog/2017_03_16_new_opam_features_opam_build">previous post</a>, has been dropped. It will be absent from the next Beta, where the following replaces it.</p>
<h3>Handling a local project</h3>
<p>Consistent with what was done for local switches, it was decided, where meaningful, to overload the <code><packages></code> arguments of the commands, allowing directory names instead, meaning "all packages defined there", with some side-effects.</p>
<p>For example, the following command is now allowed, and I believe it will be extra convenient to many:</p>
<pre><code class="language-shell-session">opam install . --deps-only
</code></pre>
<p>What this does is find <code>opam</code> (or <code><pkgname>.opam</code>) files in the current directory (<code>.</code>), resolve their installations, and install all required packages. That should be the single step before running the source build by hand.</p>
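<p>For instance, assuming the project builds with a classic <code>configure</code>/<code>make</code> sequence (this is just an illustrative session, with a made-up path), the whole setup could look like:</p>
<pre><code class="language-shell-session">cd ~/src/my-project        # hypothetical project with one or more <pkgname>.opam files
opam install . --deps-only
eval $(opam env)
./configure && make
</code></pre>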
<p>The following is a little bit more complex:</p>
<pre><code class="language-shell-session">opam install .
</code></pre>
<p>This also retrieves the packages defined at <code>.</code>, <strong>pins them</strong> to the current source (using version-control if present), and installs them. Note that subsequent runs actually synchronise the pinnings, so that packages removed or renamed in the source tree are tracked properly (<em>i.e.</em> removed ones are unpinned, new ones pinned, the other ones upgraded as necessary).</p>
<p><code>opam upgrade</code>, <code>opam reinstall</code>, and <code>opam remove</code> have also been updated to handle directories as arguments, and will work on "all packages pinned to that target", <em>i.e.</em> the packages pinned by the previous call to <code>opam install <dir></code>. In addition, <code>opam remove <dir></code> unpins the packages, consistently reverting the converse <code>install</code> operation.</p>
<p><code>opam show</code> already had a <code>--file</code> option, but has also been extended in the same way, for consistency and convenience.</p>
<p>This all, of course, works well with a local switch at <code>./</code>, but the two features can be used completely independently. Note also that the directory name must be made unambiguous with a possible package name, so make sure to use <code>./foo</code> rather than just <code>foo</code> for a local project in subdirectory <code>foo</code>.</p>
<h3>Specifying a destdir</h3>
<p>This relies on installed files tracking, but was actually independent from the other <code>opam build</code> features. It is now simply a new option to <code>opam install</code>:</p>
<pre><code class="language-shell-session">opam install foo --destdir ~/local/
</code></pre>
<p>will install <code>foo</code> normally (if it isn't installed already) and copy all its installed files, following the same hierarchy, into <code>~/local</code>. <code>opam remove --destdir</code> is also supported, to remove these files.</p>
<h3>Initialising</h3>
<p>Automatic initialisation has been dropped for the moment. It was only saving one command (<code>opam init</code>, that opam will kindly print out for you if you forget it), and had two drawbacks:</p>
<ul>
<li>some important details (like shell setup for opam) were skipped
</li>
<li>the initialisation options were much reduced, so you would often have to go back to <code>opam init</code> anyway. The other possibility being to duplicate <code>init</code> options to all commands, adding lots of noise. Keeping things separate has its merits.
</li>
</ul>
<p>Granted, another command, <code>opam switch create .</code>, was made implicit. But using a local switch is a user choice, and worse, in contradiction with the previous de facto opam default, so not creating one automatically seems safer: having to specify <code>--no-autoinit</code> to <code>opam build</code> in order to get the simpler behaviour was inconvenient and error-prone.</p>
<p>One thing is provided to help with initialisation, though: <code>opam switch create <dir></code> has been improved to handle package definitions at <code><dir></code>, and will use them to choose a compatible compiler, as <code>opam build</code> did. This avoids the frustration of creating a switch, then finding out that the package wasn't compatible with the chosen compiler version, and having to start over with an explicit choice of a different compiler.</p>
<p>If you would really like automatic initialisation, and have a better interface to propose, your feedback is welcome!</p>
<h3>More related options</h3>
<p>A few other new options have been added to <code>opam install</code> and related commands, to improve the project-local workflows:</p>
<ul>
<li><code>opam install --keep-build-dir</code> is now complemented with <code>--reuse-build-dir</code>, for incremental builds within opam (assuming your build-system supports it correctly). At the moment, you should specify both on every upgrade of the concerned packages, or you could set the <code>OPAMKEEPBUILDDIR</code> and <code>OPAMREUSEBUILDDIR</code> environment variables.
</li>
<li><code>opam install --inplace-build</code> runs the scripts directly within the source dir instead of a dedicated copy. If multiple packages are pinned to the same directory, this disables parallel builds of these packages.
</li>
<li><code>opam install --working-dir</code> uses the working directory state of your project, instead of the state registered in the version control system. Don't worry, opam will warn you if you have uncommitted changes and forgot to specify <code>--working-dir</code>.
</li>
</ul>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
<h1>Comments</h1>
<p>Hez Carty (4 May 2017 at 21 h 30 min):</p>
<blockquote>
<p>Would a command like “opam init $DIR” and “opam init $DIR --deps-only” work for an auto-initialization interface? Ideally creating the equivalent of a bare .opam/ using $DIR as $OPAMROOT + install a local switch + “opam install .” (with --deps-only if specified) under the newly created switch.</p>
</blockquote>
<p>Louis Gesbert (5 May 2017 at 7 h 50 min):</p>
<blockquote>
<p><code>opam init DIR</code> is currently used and means “use DIR as your initial, default package repository”.
Overloading <code>opam init</code> sounds like a good approach though, esp. since the default of the command is already to create an initial switch. But a new flag, e.g. <code>opam init --here</code>, could be used to mean: do <code>opam init --bare</code> (it’s idempotent), <code>opam switch create .</code> and then <code>opam install .</code>.</p>
<p>The issue that remains is inherent to compound commands: we would have to port e.g. the <code>--deps-only</code> option to <code>opam init</code>, making the interface and doc heavier, and it would only make sense in this specific use-case; either that, or limit the expressivity of the compound command, requiring people to fall back to the individual ones when they need some more specific features.</p>
</blockquote>
new opam features: local switcheshttps://ocamlpro.com/blog/2017_04_27_new_opam_features_local_switches2017-04-27T08:12:13Z2017-04-27T08:12:13Z
Louis Gesbert
Among the areas we wanted to improve on for opam 2.0 was the handling of switches. In opam 1.2, they are simply accessed by a name (the OCaml version by default), and are always stored into ~/.opam/<name>. This is fine, but can get a bit cumbersome when many switches are in presence, as there is no ...<p>Among the areas we wanted to improve on for opam 2.0 was the handling of
<em>switches</em>. In opam 1.2, they are simply accessed by a name (the OCaml version
by default), and are always stored into <code>~/.opam/<name></code>. This is fine, but can
get a bit cumbersome when many switches are in presence, as there is no way to
sort them or associate them with a given project.</p>
<blockquote>
<h3>A reminder about <em>switches</em></h3>
<p>For those unfamiliar with it, switches, in opam, are independent prefixes with
their own compiler and set of installed packages. The <code>opam switch</code> command
allows one to create and remove switches, as well as to select the currently active
one, where operations like <code>opam install</code> will operate.</p>
<p>Their uses include easily juggling between versions of OCaml, or of a library,
having incompatible packages installed separately but at the same time, running
tests without damaging your "main" environment, and, quite often, separation of
environment for working on different projects.</p>
<p>You can also select a specific switch for a single command, with</p>
<pre><code>opam install foo --switch other
</code></pre>
<p>or even for a single shell session, with</p>
<pre><code>eval $(opam env --switch other)
</code></pre>
</blockquote>
<p>What opam 2.0 adds to this is the possibility to create so-called <em>local
switches</em>, stored below a directory of your choice. This gets users back in
control of how switches are organised, and wiping the directory is a safe way to
get rid of the switch.</p>
<h3>Using within projects</h3>
<p>This is the main intended use: the user can define a switch within the source of
a project, for use specifically in that project. One nice side-effect to help
with this is that, if a "local switch" is detected in the current directory or a
parent, opam will select it automatically. Just don't forget to run <code>eval $(opam env)</code> to make the environment up-to-date before running <code>make</code>.</p>
<h3>Interface</h3>
<p>The interface simply overloads the <code>switch-name</code> arguments, wherever they were
present, allowing directory names instead. So for example:</p>
<pre><code class="language-shell-session">cd ~/src/project
opam switch create ./
</code></pre>
<p>will create a local switch in the directory <code>~/src/project</code>. Then, it is for
example equivalent to run <code>opam list</code> from that directory, or <code>opam list --switch=~/src/project</code> from anywhere.</p>
<p>Note that you can bypass the automatic local-switch selection if needed by using
the <code>--switch</code> argument, by defining the variable <code>OPAMSWITCH</code>, or by using <code>eval $(opam env --switch <name>)</code>.</p>
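<p>For example, assuming your original, global switch is named <code>default</code> (the actual name may differ on your installation) and <code>foo</code> is any package, each of the following ignores the local switch:</p>
<pre><code class="language-shell-session"># assuming a switch named "default" and a package "foo"
opam list --switch default
OPAMSWITCH=default opam install foo
eval $(opam env --switch default)
</code></pre>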
<h3>Implementation</h3>
<p>In practice, the switch contents are placed in a <code>_opam/</code> subdirectory. So if
you create the switch <code>~/src/project</code>, you can browse its contents at
<code>~/src/project/_opam</code>. This is the direct prefix for the switch, so e.g.
binaries can be found directly at <code>_opam/bin/</code>: easier than searching the opam
root! The opam metadata is placed below that directory, in a <code>.opam-switch/</code>
subdirectory.</p>
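<p>Schematically, the resulting layout looks like this (an illustrative sketch; only a few typical entries of an installation prefix are shown):</p>
<pre><code>(illustrative layout)
~/src/project/
└── _opam/
    ├── bin/
    ├── lib/
    └── .opam-switch/
</code></pre>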
<p>Local switches still share the opam root, and in particular depend on the
repositories defined and cached there. It is now possible, however, to select
different repositories for different switches, but that is a subject for another
post.</p>
<p>Finally, note that removing that <code>_opam</code> directory is handled transparently by
opam, and that if you want to share a local switch between projects, symlinking
the <code>_opam</code> directory is allowed.</p>
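<p>For instance (purely illustrative paths), reusing the switch of one project from another one boils down to:</p>
<pre><code class="language-shell-session"># hypothetical paths
ln -s ~/src/project/_opam ~/src/other-project/_opam
</code></pre>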
<h3>Current status</h3>
<p>This feature has been present in our dev builds for a while, and you can already
use it in the
<a href="https://github.com/ocaml/opam/releases/tag/2.0.0-beta2">current beta</a>.</p>
<h3>Limitations and future extensions</h3>
<p>It is not, at the moment, possible to move a local switch directory around,
mainly due to issues related to relocating the OCaml compiler.</p>
<p>Creating a new switch still implies recompiling all the packages, and even the
compiler itself (unless you rely on a system installation). The projected
solution is to add a build cache, avoiding the need to recompile the same
package with the same dependencies. This should actually be possible with the
current opam 2.0 code, by leveraging the new hooks that are made available. Note
that relocation of OCaml is also an issue for this, though.</p>
<p>Editing tools like <code>ocp-indent</code> or <code>merlin</code> can also become an annoyance with
the multiplication of switches, because they are not automatically found if not
installed in the current switch. But the <code>user-setup</code> plugin (run <code>opam user-setup install</code>) already handles this well, and will access <code>ocp-indent</code> or
<code>tuareg</code> from their initial switch, if not found in the current one. You will
still need to install tools that are tightly bound to a compiler version, like
<code>merlin</code> and <code>ocp-index</code>, in the switches where you need them, though.</p>
<blockquote>
<p>NOTE: this article is cross-posted on
<a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and
<a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
<h1>Comments</h1>
<p>Jeremie Dimino (11 May 2017 at 8 h 27 min):</p>
<blockquote>
<p>Thanks, that seems like a useful feature. Regarding relocation of the compiler, shouldn’t it be enough to set the environment variable OCAMLLIB? AFAIK the stdlib directory is the only hardcoded path on the compiler.</p>
</blockquote>
<p>Louis Gesbert (11 May 2017 at 8 h 56 min):</p>
<blockquote>
<p>Last I checked, there were a few more problematic points, in particular generated bytecode executables statically referring to their interpreter; but yes, in any case, it’s worth experimenting in that direction using the new hooks, to see how it works in practice.</p>
</blockquote>
<p>Jeremie Dimino (12 May 2017 at 9 h 13 min):</p>
<blockquote>
<p>Indeed, I remember that we had a similar problem in the initial setup to test the public release of Jane Street packages: we were using long paths for the opam roots and the generated #! were too long for the OS… What I did back then is write a program that scanned the tree and rewrote the #! to use “#!/usr/bin/env ocamlrun”.</p>
<p>That could be an option here. The rewriting only needs to be done once, since the compiler uses <code>ocamlc -where</code>/camlheader when generating a bytecode executable.</p>
</blockquote>
EzSudokuhttps://ocamlpro.com/blog/2017_04_01_ezsudoku2017-04-01T08:12:13Z2017-04-01T08:12:13Z
chambart
As you may have noticed, on the begining of April I have some urge to write something technical about some deeply specific point of OCaml. This time I'd like to tackle that through sudoku. It appeard that Sudoku is of great importance considering the number of posts explaining how to write a solver....<p>As you may have noticed, at the beginning of April I get some urge to write something technical about some deeply specific point of OCaml. This time I'd like to tackle that through sudoku.</p>
<p>It appears that Sudoku is of great importance, considering the number of posts explaining how to write a solver. Following that trend, I will explain how to write one in OCaml. But with a twist.</p>
<p>We will try to optimize it. I won't show you anything as obvious as how to micro-optimize your code or some smart heuristic. No, we are not aiming at being merely algorithmically good. We will try to make something serious: we want it to be solved even before the program starts.</p>
<p>Yes really. Before. And I will show you how to use a feature of OCaml 4.03 that is sadly not well known.</p>
<hr />
<p>First of all, as we do like types and safe programs, we will define what a well-formed sudoku solution looks like. And by defining, of course, I mean declaring some GADTs with enough constraints to ensure that only correct solutions are valid.</p>
<p>I assume that you know the rules of Sudoku and will refrain from infuriating you by explaining them. But we will still need some vocabulary.</p>
<p>So the aim of sudoku is to fill a 'grid' with 'symbols' satisfying some 'row', 'column' and 'square' constraints.</p>
<p>To make the code examples readable we will stick to <code>4*4</code> sudokus. It's the smallest size that behaves the same way as <code>9*9</code> ones (I considered going for <code>1*1</code> ones, but the article ended up being a bit short). Of course everything would still apply to any <code>n^2*n^2</code> sized one.</p>
<p>So let's start digging into some types. As we will refine them along the way, I will leave some parts to be filled in later. This is represented by '...'.</p>
<p>First there are symbols, just 4 of them because we reduced the size. Nothing special about that right now.</p>
<pre><code class="language-ocaml">type ... symbol =
| A : ...
| B : ...
| C : ...
| D : ...
</code></pre>
<p>And a grid is 16 symbols. To avoid too much visual clutter in the type I just put them linearly. The comment show how it is supposed to be seen in the 2d representation of the grid:</p>
<pre><code class="language-ocaml">(* a b c d
e f g h
i j k l
m n o p *)
type grid =
Grid :
... symbol * (* a *)
... symbol * (* b *)
... symbol * (* c *)
... symbol * (* d *)
... symbol * (* e *)
... symbol * (* f *)
... symbol * (* g *)
... symbol * (* h *)
... symbol * (* i *)
... symbol * (* j *)
... symbol * (* k *)
... symbol * (* l *)
... symbol * (* m *)
... symbol * (* n *)
... symbol * (* o *)
... symbol (* p *)
-> grid
</code></pre>
<p>Right now grid is a simple 16-tuple of symbols, but we will soon start filling in those '...' to forbid any set of symbols that is not a valid solution.</p>
<p>Each constraint looks like: 'among those 4 positions, no 2 symbols are the same'. To express that (in fact something equivalent, but a bit simpler to state with our types), we will need to name positions. So let's introduce some names:</p>
<pre><code class="language-ocaml">type r1 (* the first position among a row constraint *)
type r2 (* the second position among a row constraint *)
type r3
type r4
type c1 (* the first position among a column constraint *)
type c2
type c3
type c4
type s1
type s2
type s3
type s4
type ('row, 'column, 'square) position
</code></pre>
<p>On the 2d grid this is how the various positions will be mapped.</p>
<pre><code>r1 r2 r3 r4
r1 r2 r3 r4
r1 r2 r3 r4
r1 r2 r3 r4
c1 c1 c1 c1
c2 c2 c2 c2
c3 c3 c3 c3
c4 c4 c4 c4
s1 s2 s1 s2
s3 s4 s3 s4
s1 s2 s1 s2
s3 s4 s3 s4
</code></pre>
<p>For instance, the position g, in the 2nd row, 3rd column, will be at the 3rd position in its row constraint, 2nd in its column constraint, and 3rd in its square constraint:</p>
<pre><code class="language-ocaml">type g = (r3, c2, s3) position
</code></pre>
<p>We could have declared a single constraint position type, but this is slightly more readable than:</p>
<pre><code class="language-ocaml">type g = (p3, p2, p3) position
</code></pre>
<p>The position type is a phantom type: we could have provided a representation, but since no value of this type will ever be created, it's less confusing to state it that way.</p>
<pre><code class="language-ocaml">type a = (r1, c1, s1) position
type b = (r2, c1, s2) position
type c = (r3, c1, s1) position
type d = (r4, c1, s2) position
type e = (r1, c2, s3) position
type f = (r2, c2, s4) position
type g = (r3, c2, s3) position
type h = (r4, c2, s4) position
type i = (r1, c3, s1) position
type j = (r2, c3, s2) position
type k = (r3, c3, s1) position
type l = (r4, c3, s2) position
type m = (r1, c4, s3) position
type n = (r2, c4, s4) position
type o = (r3, c4, s3) position
type p = (r4, c4, s4) position
</code></pre>
<p>It is now possible to state, for each symbol, in which position it is, so we will start filling in those types a bit.</p>
<pre><code class="language-ocaml">type ('position, ...) symbol =
| A : (('r, 'c, 's) position, ...) symbol
| B : (('r, 'c, 's) position, ...) symbol
| C : (('r, 'c, 's) position, ...) symbol
| D : (('r, 'c, 's) position, ...) symbol
</code></pre>
<p>This means that a symbol value is then associated with a single position in each constraint. We will need to state that in the grid type too:</p>
<pre><code class="language-ocaml">type grid =
Grid :
(a, ...) symbol * (* a *)
(b, ...) symbol * (* b *)
(c, ...) symbol * (* c *)
(d, ...) symbol * (* d *)
(e, ...) symbol * (* e *)
(f, ...) symbol * (* f *)
(g, ...) symbol * (* g *)
(h, ...) symbol * (* h *)
(i, ...) symbol * (* i *)
(j, ...) symbol * (* j *)
(k, ...) symbol * (* k *)
(l, ...) symbol * (* l *)
(m, ...) symbol * (* m *)
(n, ...) symbol * (* n *)
(o, ...) symbol * (* o *)
(p, ...) symbol (* p *)
-> grid
</code></pre>
<p>We just need to forbid a symbol from appearing in two different positions of a given row/column/square to prevent invalid solutions.</p>
<pre><code class="language-ocaml">type 'fields row constraint 'fields = < a : 'a; b : 'b; c : 'c; d : 'd >
type 'fields column constraint 'fields = < a : 'a; b : 'b; c : 'c; d : 'd >
type 'fields square constraint 'fields = < a : 'a; b : 'b; c : 'c; d : 'd >
</code></pre>
<p>Those types represent the statement 'in this row/column/square, the symbol a is at the position 'a, the symbol b is at the position 'b, ...'</p>
<p>For instance, the row 'A D B C' will be represented by</p>
<pre><code class="language-ocaml">< a : l1; b : l3; c : l4; d : l2 > row
</code></pre>
<p>Which reads: 'The symbol A is in first position, B in third position, C in fourth, and D in second'</p>
<p>The object type is used to make things a bit lighter later and to allow naming the fields.</p>
<p>Now the symbols can be a bit more annotated:</p>
<pre><code class="language-ocaml">type ('position, 'row, 'column, 'square) symbol =
| A : (('r, 'c, 's) position,
< a : 'r; .. > row,
< a : 'c; .. > column,
< a : 's; .. > square)
symbol
| B : (('r, 'c, 's) position,
< b : 'r; .. > row,
< b : 'c; .. > column,
< b : 's; .. > square)
symbol
| C : (('r, 'c, 's) position,
< c : 'r; .. > row,
< c : 'c; .. > column,
< c : 's; .. > square)
symbol
| D : (('r, 'c, 's) position,
< d : 'r; .. > row,
< d : 'c; .. > column,
< d : 's; .. > square)
symbol
</code></pre>
<p>Notice that '..' is not '...'. Those dots are really part of the OCaml syntax: it means 'put whatever you want here, I don't care'. There is nothing more to add to this type.</p>
<p>This type declaration reports the position information. Using the same variable name 'r in the position and in the row constraint parameter for instance means that both fields will have the same type.</p>
<p>For instance, a symbol 'B' in position 'g' would be in the 3rd position of its row, 2nd position of its column, and 3rd position of its square:</p>
<pre><code class="language-ocaml">let v : (g, _, _, _) symbol = B;;
val v :
(g, < b : r3 > row,
< b : c2 > column,
< b : s3 > square)
symbol = B
</code></pre>
<p>Those type constraints ensure that this is correctly reported.</p>
<p>The real output of the type checker is a bit more verbose, but I removed the irrelevant parts:</p>
<pre><code class="language-ocaml">val v :
(g, < a : 'a; b : r3; c : 'b; d : 'c > row,
< a : 'd; b : c2; c : 'e; d : 'f > column,
< a : 'g; b : s3; c : 'h; d : 'i > square)
symbol = B
</code></pre>
<p>We are now quite close to a completely constrained type. We just need to say that the various symbols from the same row/column/square constraint have the same type:</p>
<pre><code class="language-ocaml">type grid =
Grid :
(a, 'row1, 'column1, 'square1) symbol *
(b, 'row1, 'column2, 'square1) symbol *
(c, 'row1, 'column3, 'square2) symbol *
(d, 'row1, 'column4, 'square2) symbol *
(e, 'row2, 'column1, 'square1) symbol *
(f, 'row2, 'column2, 'square1) symbol *
(g, 'row2, 'column3, 'square2) symbol *
(h, 'row2, 'column4, 'square2) symbol *
(i, 'row3, 'column1, 'square3) symbol *
(j, 'row3, 'column2, 'square3) symbol *
(k, 'row3, 'column3, 'square4) symbol *
(l, 'row3, 'column4, 'square4) symbol *
(m, 'row4, 'column1, 'square3) symbol *
(n, 'row4, 'column2, 'square3) symbol *
(o, 'row4, 'column3, 'square4) symbol *
(p, 'row4, 'column4, 'square4) symbol
-> grid
</code></pre>
<p>That is, two symbols in the same row/column/square will share the same 'row/'column/'square type. Any couple of symbols in, say, a row, must agree on that type, hence on the position of every symbol.</p>
<p>Let's look at the symbols at the 'a' and 'c' positions, for instance. Both share the same 'row1 type variable. There are two cases: either both are 'A's, or one is not.</p>
<ul>
<li>If one symbol is not an 'A', let's say those are 'C' and 'A' symbols. Their row types (pun almost intended) will be respectively <code>< c : r1; .. ></code> and <code>< a : r3; .. ></code>. Meaning that 'C' does not care about the position of 'A' and conversely. Those types are compatible. No problem here.
</li>
<li>If both are 'A's then something else happens. Their row types will be <code>< a : r1; .. ></code> and <code>< a : r3; .. ></code> which is certainly not compatible since r1 and r3 are not compatible. This will be rejected.
Now we have a grid type that checks the sudoku constraints !
</li>
</ul>
<p>Let's try it.</p>
<pre><code class="language-ocaml">let ok =
Grid
(A, B, C, D,
C, D, A, B,
D, A, B, C,
B, C, D, A)
val ok : grid = Grid (A, B, C, D, C, D, A, B, D, A, B, C, B, C, D, A)
let not_ok =
Grid
(A, B, C, D,
C, D, A, B,
D, A, B, C,
B, C, A, D)
B, C, A, D);;
^
Error: This expression has type
(o, < a : r3; b : r1; c : r2; d : 'a > row,
< a : c4; b : 'b; c : 'c; d : 'd > column,
< a : s3; b : 'e; c : 'f; d : 'g > square)
symbol
but an expression was expected of type
(o, < a : r3; b : r1; c : r2; d : 'a > row,
< a : c2; b : c3; c : c1; d : 'h > column,
< a : 'i; b : s1; c : s2; d : 'j > square)
symbol
Types for method a are incompatible
</code></pre>
<p>What it is trying to say is that 'A' is both at position '2' and '4' of its column. Well it seems to work.</p>
<h2>Solving it</h2>
<p>But we are not only interested in checking that a solution is correct, we want to find solutions !</p>
<p>But with 'one weird trick' we will magically transform it into a solver, namely the <code>-> .</code> syntax. It was introduced in OCaml 4.03 for some other <a href="https://ocaml.org/manual/gadts.html#p:gadt-refutation-cases">purpose</a>. But we will now use its hidden power !</p>
<p>This is the right-hand side of a pattern. It explicitly states that a pattern is unreachable. For instance:</p>
<pre><code class="language-ocaml">type _ t =
| Int : int -> int t
| Float : float -> float t
let add (type v) (a : v t) (b : v t) : v t =
match a, b with
| Int a, Int b -> Int (a + b)
| Float a, Float b -> Float (a +. b)
| _ -> .
</code></pre>
<p>By writing it here you state that you don't expect any other pattern to verify the type constraints. This is effectively the case here. In general you won't need this, as the exhaustivity checker will see it by itself. But in some intricate situations it will need some hints to work a bit harder. For more information, see the <a href="http://export.arxiv.org/abs/1702.02281">article by Jacques Garrigue and Jacques Le Normand</a>.</p>
<p>This may be a bit obscure, but this is what we now need. Indeed, we can ask the exhaustivity checker whether there exists a value verifying the pattern and the type constraints. For instance, to solve a problem, we ask the compiler to check whether there is any value verifying a partial solution encoded as a pattern.</p>
<pre><code> A _ C _
_ D _ B
_ A D _
D _ B _
</code></pre>
<pre><code class="language-ocaml">let test x =
match x with
| Grid
(A, _, C, _,
_, D, _, B,
_, A, D, _,
D, _, B, _) -> .
| _ -> ()
Error: This match case could not be refuted.
Here is an example of a value that would reach it:
Grid (A, B, C, D, C, D, A, B, B, A, D, C, D, C, B, A)
</code></pre>
<p>The checker tells us that there is a solution verifying those constraints, and provides it.</p>
<p>If there were no solution, there would have been no error.</p>
<pre><code class="language-ocaml">let test x =
match x with
| Grid
(A, B, C, _,
_, _, _, D,
_, _, _, _,
_, _, _, _) -> .
| _ -> ()
val test : grid -> unit = <fun>
</code></pre>
<p>And that's it !</p>
<h2>Wrapping it up</h2>
<p>Of course that's a bit cheating since the program is not executable, but who cares really ?
If you want to use it, I made a small (ugly) <a href="https://gist.github.com/chambart/15b18770d2368cc703a32f18fe12d179">script</a> generating those types. You can try it on bigger problems, but in fact it is a bit exponential. So you shouldn't really expect an answer too soon.</p>
<h1>Comments</h1>
<p>Louis Gesbert (28 April 2017 at 8 h 11 min):</p>
<blockquote>
<p>Brilliant!</p>
</blockquote>
new opam features: "opam build"https://ocamlpro.com/blog/2017_03_16_new_opam_features_opam_build2017-03-16T08:12:13Z2017-03-16T08:12:13Z
Louis Gesbert
UPDATE: after discussions following this post, this feature was abandoned with the interface presented below. See this post for the details and the new interface! The new opam 2.0 release, currently in beta, introduces several new features. This post gets into some detail on the new opam build comma...<blockquote>
<p>UPDATE: after discussions following this post, this feature was abandoned with
the interface presented below. See <a href="/blog/2017_05_04_new_opam_features_opam_install_dir">this post</a> for
the details and the new interface!</p>
</blockquote>
<p>The new opam 2.0 release, currently in beta, introduces several new features.
This post gets into some detail on the new <code>opam build</code> command, its purpose,
its use, and some implementation aspects.</p>
<p><strong><code>opam build</code> is run from the source tree of a project, and does not rely on a
pre-existing opam installation.</strong> As such, it adds a new option besides the
existing workflows based on managing shared OCaml installations in the form of
switches.</p>
<h3>What does it do ?</h3>
<p>Typically, this is used in a fresh git clone of some OCaml project. Like when
pinning the package, opam will find and leverage package definitions found in
the source, in the form of <code>opam</code> files.</p>
<ul>
<li>if opam hasn't been initialised (no <code>~/.opam</code>), this is taken care of.
</li>
<li>if no switch is otherwise explicitly selected, a <em>local switch</em> is used, and
created if necessary (<em>i.e.</em> in <code>./_opam/</code>)
</li>
<li>the metadata for the current project is registered, and the package installed
after its dependencies, as opam usually does
</li>
</ul>
<p>This is particularly useful for <strong>distributing projects</strong> to people not used to
opam and the OCaml ecosystem: the setup steps are automatically taken care of,
and a single <code>opam build</code> invocation can take care of resolving the dependency
chains for your package.</p>
<p>If building the project directly is preferred, adding <code>--deps-only</code> is a good
way to get the dependencies ready for the project:</p>
<pre><code class="language-shell-session">opam build --deps-only
eval $(opam config env)
./configure; make; etc.
</code></pre>
<p>Note that if you just want to handle project-local opam files, <code>opam build</code> can
also be used in your existing switches: just specify <code>--no-autoinit</code>, <code>--switch</code>
or make sure the <code>OPAMSWITCH</code> variable is set. <em>E.g.</em> <code>opam build --no-autoinit --deps-only</code> is a convenient way to get the dependencies for the local project
ready in your current switch.</p>
<h3>Additional functions</h3>
<h4>Installation</h4>
<p>The installation of the packages happens as usual to the prefix corresponding to
the switch used (<code><project-root>/_opam/</code> for a local switch). But it is
possible, with <code>--install-prefix</code>, to further install the package to the system:</p>
<pre><code class="language-shell-session">opam build --install-prefix ~/local
</code></pre>
<p>will install the results of the package found in the current directory below
~/local.</p>
<p>The dependencies of the package won't be installed, so this is intended for
programs, assuming they are relocatable, and not for libraries.</p>
<h4>Choosing custom repositories</h4>
<p>The user can pre-select the repositories to use on the creation of the local
switch with:</p>
<pre><code class="language-shell-session">opam build --repositories <repos>
</code></pre>
<p>where <code><repos></code> is a comma-separated list of repositories, specified either as
<code>name=URL</code>, or <code>name</code> if already configured on the system.</p>
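<p>For instance (the extra repository name and URL below are made up), one could combine the already-configured default repository with a project-specific one:</p>
<pre><code class="language-shell-session"># "custom" and its URL are made up for the example
opam build --repositories default,custom=https://example.com/opam-repo
</code></pre>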
<h4>Multiple packages</h4>
<p>Multiple packages are commonly found to share a single repository. In this case,
<code>opam build</code> registers and builds all of them, respecting cross-dependencies.
The opam files to use can also be explicitly selected on the command-line.</p>
<p>In this case, specific opam files must be named <code><package-name>.opam</code>.</p>
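<p>For example, a source tree defining two packages (hypothetical names) would simply contain both definition files at its root:</p>
<pre><code>(hypothetical layout)
my-project/
├── foo-core.opam
├── foo-client.opam
└── src/
</code></pre>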
<h3>Implementation details</h3>
<p>The choice of the compiler, on automatic initialisation, is either explicit,
using the <code>--compiler</code> option, or automatic. In the latter case, the default
selection is used (see <code>opam init --help</code>, section "CONFIGURATION FILE" for
details), but a compiler compatible with the local packages found is searched
from that. This allows, for example, to choose a system compiler when available
and compatible, avoiding a recompilation of OCaml.</p>
<p>When using <code>--install-prefix</code>, the normal installation is done, then the
tracking of package-installed files, introduced in opam 2.0, is used to extract
the installed files from the switch and copy them to the prefix.</p>
<p>The packages installed through <code>opam build</code> are not registered in any
repository, and this is not an implicit use of <code>opam pin</code>: the rationale is that
packages installed this way will also be updated by repeating <code>opam build</code>. This
means that when using other commands, <em>e.g.</em> <code>opam upgrade</code>, opam won't try to
keep the packages to their local, source version, and will either revert them to
their repository definition, or remove them, if they need recompilation.</p>
<h3>Planned extensions</h3>
<p>This is still in beta: there are still rough edges, please experiment and give
feedback! It is still possible that the command syntax and semantics change
significantly before release.</p>
<p>Another use-case that we are striving to improve is sharing of development
setups (share sets of pinned packages, depend on specific remotes or git hashes,
etc.). We have <a href="https://github.com/ocaml/opam/issues/2762">many</a>
<a href="https://github.com/ocaml/opam/issues/2495">ideas</a> to
<a href="https://github.com/ocaml/opam/issues/1734">improve</a> on this, but <code>opam build</code>
is not, as of today, a direct solution to this. In particular, installing this
way still relies on the default opam repository; a way to define specific
options for the switch that is implicitly created on <code>opam build</code> is in the
works.</p>
<blockquote>
<p>NOTE: this article is cross-posted on <a href="https://opam.ocaml.org/blog/">opam.ocaml.org</a> and <a href="/blog">ocamlpro.com</a>.</p>
</blockquote>
<h1>Comments</h1>
<p>Louis Gesbert (16 March 2017 at 14 h 31 min):</p>
<blockquote>
<p>Some discussion on a better naming and making some parts of this more widely available in the opam CLI is ongoing at https://github.com/ocaml/opam/issues/2882</p>
</blockquote>
<p>Hez Carty (16 March 2017 at 17 h 23 min):</p>
<blockquote>
<p>Is it possible/planned to support sharing of compilers across local (or global) switches? It would be very useful to have a global 4.04.0+flambda switch including only the compiler itself or the compiler + basic tools like ocp-indent and merlin. Then a number of projects could share this base installation but have their own locally installed dependencies without duplicating the entire build time per-project.</p>
</blockquote>
<p>Louis Gesbert (17 March 2017 at 10 h 10 min):</p>
<blockquote>
<p>Sharing compilers, or other packages across switches is not supported at the moment. However:</p>
<p>You can still use the global <code>system compiler</code> on any switch, local or not, to avoid its recompilation.
What is planned, as a first step, for after the 2.0 release, is to add a cache of compiled packages. Hooks are already in place to allow this, and opam is able to track the files installed by each package already, so the most difficult part is probably going to be the relocation issues with OCaml itself.</p>
<p>A cache is an easier solution to warrant consistency: with shared switches, the problem of reinstallations and keeping everything consistent gets much more complex — what happens when you change the compiler of your “master” switch ?</p>
</blockquote>
<p>Hez Carty (20 March 2017 at 16 h 46 min):</p>
<blockquote>
<p>That sounds great, thank you. Should make this kind of local switch more useful when working with large numbers of projects.</p>
</blockquote>
opam 2.0 Beta is out!https://ocamlpro.com/blog/2017_02_09_opam_2.0_beta_is_out2017-02-09T08:12:13Z2017-02-09T08:12:13Z
Louis Gesbert
UPDATE (2017-02-14): A beta2 is online, which fixes issues and performance of the opam build command. Get the new binaries, or recompile the opam-devel package and replace the previous binary. We are pleased to announce that the beta release of opam 2.0 is now live! You can try it already, bootstrap...<blockquote>
<p>UPDATE (2017-02-14): A beta2 is online, which fixes issues and performance of
the <code>opam build</code> command. Get the new
<a href="https://github.com/ocaml/opam/releases/tag/2.0.0-beta2">binaries</a>, or
recompile the <a href="http://opam.ocaml.org/packages/opam-devel/">opam-devel</a> package
and replace the previous binary.</p>
</blockquote>
<p>We are pleased to announce that the beta release of opam 2.0 is now live! You
can try it already, bootstrapping from a working 1.2 opam installation, with:</p>
<pre><code class="language-shell-session">opam update; opam install opam-devel
</code></pre>
<p>With about a thousand patches since the last stable release, we took the time to
gather feedback after <a href="../opam-2-0-preview">our last announcement</a> and
implemented a couple of additional, most-wanted features:</p>
<ul>
<li>An <code>opam build</code> command that, from the root of a source tree containing one
or more package definitions, can automatically handle initialisation and
building of the sources in a local switch.
</li>
<li>Support for
<a href="https://github.com/hannesm/conex-paper/raw/master/paper.pdf">repository signing</a>
through the external <a href="https://github.com/hannesm/conex">Conex</a> tool, being
developed in parallel.
</li>
</ul>
<p>There are many more features, like the new <code>opam clean</code> and <code>opam admin</code>
commands, a new archive caching system, etc., but we'll let you check the full
<a href="https://github.com/ocaml/opam/blob/2.0.0-beta/CHANGES">changelog</a>.</p>
<p>We also kept improving on the
<a href="../opam-2-0-preview/#Afewhighlights">already announced features</a>, including
compilers as packages, local switches, per-switch repository configuration,
package file tracking, etc.</p>
<p>The updated documentation is at http://opam.ocaml.org/doc/2.0/. If you are
developing opam-related tools, you may also want to browse the
<a href="https://opam.ocaml.org/doc/2.0/api/index.html">new APIs</a>.</p>
<h2>Try it out</h2>
<p>Please try out the beta, and report any issues or missing features. You can:</p>
<ul>
<li>Build it from source in opam, as shown above (<code>opam install opam-devel</code>)
</li>
<li>Use the <a href="https://github.com/ocaml/opam/releases/tag/2.0.0-beta">pre-built binaries</a>.
</li>
<li>Building from the source tarball:
<a href="https://github.com/ocaml/opam/releases/download/2.0.0-beta/opam-full-2.0.0-beta.tar.gz">download here</a>
and build using <code>./configure && make lib-ext && make</code> if you have OCaml >=
4.01 already available; <code>make cold</code> otherwise
</li>
<li>Or directly from the
<a href="https://github.com/ocaml/opam/tree/2.0.0-beta">git tree</a>, following the
instructions included in the README. Some files have been moved around, so if
your build fails after you updated an existing git clone, try to clean it up
(<code>git clean -dx</code>).
</li>
</ul>
<p>Some users have been using the alpha for the past months without problems, but
you may want to keep your opam 1.2 installation intact until the release is out.
An easy way to do this is with an alias:</p>
<pre><code class="language-shell-session">alias opam2="OPAMROOT=~/.opam2 path/to/opam-2-binary"
</code></pre>
<h2>Changes to be aware of</h2>
<h3>Command-line interface</h3>
<ul>
<li><code>opam switch create</code> is now needed to create new switches, and <code>opam switch</code>
is now much more expressive
</li>
<li><code>opam list</code> is also much more expressive, but be aware that the output may
have changed if you used it in scripts
</li>
<li>new commands:
<ul>
<li><code>opam build</code>: setup and build a local source tree
</li>
<li><code>opam clean</code>: various cleanup operations (wiping caches, etc.)
</li>
<li><code>opam admin</code>: manage software repositories, including upgrading them to
opam 2.0 format (replaces the <code>opam-admin</code> tool)
</li>
<li><code>opam env</code>, <code>opam exec</code>, <code>opam var</code>: shortcuts for the <code>opam config</code> subcommands
</li>
</ul>
</li>
<li><code>opam repository add</code> will now setup the new repository for the current switch
only, unless you specify <code>--all</code>
</li>
<li>Some flags, like <code>--test</code>, now apply to the packages listed on the
command-line only. For example, <code>opam install lwt --test</code> will build and
install lwt and all its dependencies, but only build/run the tests of the
<code>lwt</code> package. Test-dependencies of its dependencies are also ignored
</li>
<li>The new <code>opam install --soft-request</code> is useful for batch runs: it will
maximise the installed packages among the requested ones, but won't fail if
all can't be installed
</li>
</ul>
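<p>To illustrate a few of the above (hypothetical invocations, based only on the descriptions in this list):</p>
<pre><code class="language-shell-session">opam switch create 4.04.0       # switches are now created explicitly
opam install lwt --test        # build/run the tests of lwt only, not of its dependencies
opam clean                     # wipe caches and other temporary data
opam repository add my-repo https://example.com/repo --all   # set up for all switches, not just the current one
</code></pre>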
<p>As before, opam is self-documenting, so be sure to check <code>opam COMMAND --help</code>
first when in doubt. The bash completion scripts have also been thoroughly
improved, and may help navigating the new options.</p>
<h3>Metadata</h3>
<p>There are both a few changes (extensions, mostly) to the package description
format, and more drastic changes to the repository format, mainly related to
translating the old compiler definitions into packages.</p>
<ul>
<li>opam will automatically update, internally, definitions of pinned packages as
well as repositories in the 1.2 format
</li>
<li>however, it is faster to use repositories in the 2.0 format directly. To that
end, please use the <code>opam admin upgrade</code> command on your repositories. The
<code>--mirror</code> option will create a 2.0 mirror and put in place proper
redirections, allowing your original repository to retain the old format
</li>
</ul>
<p>The official opam repository at https://opam.ocaml.org remains in 1.2 format for
now, but has a live-updated 2.0 mirror to which you should be automatically
redirected. It cannot yet accept package definitions in 2.0 format.</p>
<h4>Package format</h4>
<ul>
<li>Any <code>available:</code> constraints based on the OCaml compiler version should be
rewritten into dependencies to the <code>ocaml</code> package
</li>
<li>Separate <code>build:</code> and <code>install:</code> instructions are now required
</li>
<li>It is now preferred to include the old <code>url</code> and <code>descr</code> files (containing the
archive URL and package description) in the <code>opam</code> file itself (see the new
<a href="http://opam.ocaml.org/doc/2.0/Manual.html#opamfield-synopsis"><code>synopsis:</code></a>
and
<a href="http://opam.ocaml.org/doc/2.0/Manual.html#opamfield-description"><code>description:</code></a>
fields, and the
<a href="http://opam.ocaml.org/doc/2.0/Manual.html#opamsection-url">url {}</a> file
section)
</li>
<li>Building tests and documentation should now be part of the main <code>build:</code>
instructions, using the <code>{test}</code> and <code>{doc}</code> filters. The <code>build-test:</code> and
<code>build-doc:</code> fields are still supported.
</li>
<li>It is now possible to use opam variables within dependencies, for example
<code>depends: [ "foo" {= version} ]</code>, for a dependency to package <code>foo</code> at the
same version as the package being defined, or <code>depends: [ "bar" {os = "linux"} ]</code> for a dependency that only applies on Linux.
</li>
<li>The new <code>conflict-class:</code> field allows mutual conflicts among a set of
packages to be declared. Useful, for example, when there are many concurrent,
incompatible implementations.
</li>
<li>The <code>ocaml-version:</code> field has been deprecated for a long time and is no
longer accepted. This should now be a dependency on the <code>ocaml</code> package
</li>
<li>Three types of checksums are now accepted: you should use <code>md5=<hex-value></code>,
<code>sha256=<hex-value></code> or <code>sha512=<hex-value></code>. We'll be gradually deprecating
md5 in favour of the more secure algorithms; multiple checksums are allowed
</li>
<li>Patches supplied in the <code>patches:</code> field must apply with <code>patch -p1</code>
</li>
<li>The new <code>setenv:</code> field allows packages to export updates to environment
variables;
</li>
<li>Custom fields <code>x-foo:</code> can be used for extensions and external tools
</li>
<li><code>"""</code> delimiters allow unescaped strings
</li>
<li><code>&</code> now has the customary higher precedence than <code>|</code> in formulas
</li>
<li>Installed files are now automatically tracked, meaning that the <code>remove:</code>
field is usually no longer required.
</li>
</ul>
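<p>Putting several of these points together, a minimal 2.0-format package definition could look like the following sketch (the package name, URL and commands are purely illustrative):</p>
<pre><code class="language-shell-session">opam-version: "2.0"
synopsis: "One-line description of the package"
description: "A longer description."
depends: [
  "ocaml" {>= "4.02"}
  "foo" {= version}
  "bar" {os = "linux"}
]
build: [
  [make]
  [make "test"] {test}
]
install: [make "install"]
url {
  src: "https://example.com/foo-1.0.tar.gz"
  checksum: "sha256=<hex-value>"
}
</code></pre>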
<p>The full, up-to-date specification of the format can be browsed in the
<a href="http://opam.ocaml.org/doc/2.0/Manual.html#opam">manual</a>.</p>
<h4>Repository format</h4>
<p>In the official, default repository, and also when migrating repositories from
older format versions, there are:</p>
<ul>
<li>A virtual <code>ocaml</code> package, that depends on any implementation of the OCaml
compiler. This is what packages should depend on, and the version is the
corresponding base OCaml version (e.g. <code>4.04.0</code> for the <code>4.04.0+fp</code> compiler).
It also defines various configuration variables, see <code>opam config list ocaml</code>.
</li>
<li>Three mutually-exclusive packages providing actual implementations of the
OCaml toolchain:
<ul>
<li><code>ocaml-base-compiler</code> provides the official releases
</li>
<li><code>ocaml-variants.<base-version>+<variant-name></code> contains all the other
variants
</li>
<li><code>ocaml-system-compiler</code> maps to a compiler installed on the system
outside of opam
</li>
</ul>
</li>
</ul>
<p>The layout is otherwise the same, apart from:</p>
<ul>
<li>The <code>compilers/</code> directory is ignored
</li>
<li>A <code>repo</code> file should be present, containing at least the line <code>opam-version: "2.0"</code>
</li>
<li>The indexes for serving over HTTP have been simplified, and <code>urls.txt</code> is no
longer needed. See <code>opam admin index --help</code>
</li>
<li>The <code>archives/</code> directory is no longer used. The cache now uses a different
format and is configured through the <code>repo</code> file, defaulting to <code>cache/</code> on
the same server. See <code>opam admin cache --help</code>
</li>
</ul>
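<p>For instance, a minimal <code>repo</code> file could look like this (the cache field name below is our assumption; refer to <code>opam admin cache --help</code> for the authoritative syntax):</p>
<pre><code class="language-shell-session">opam-version: "2.0"
# assumed field name for pointing clients at the archive cache:
archive-mirrors: "cache"
</code></pre>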
<h2>Feedback</h2>
<p>Thanks for trying out the beta! Please let us have feedback, preferably to the
<a href="https://github.com/ocaml/opam/issues">opam tracker</a>; other options include the
<a href="mailto:opam-devel@lists.ocaml.org">opam-devel</a> list and #opam IRC channel on
Freenode.</p>
Release of Alt-Ergo 1.30 with experimental support for models generation https://ocamlpro.com/blog/2016_11_21_release_of_alt_ergo_1_30_with_experimental_support_for_models_generation2016-11-21T08:12:13Z2016-11-21T08:12:13Z
Mohamed Iguernlala
We have recently released a new (public up-to-date) version of Alt-Ergo. We focus in this article on its main new feature: experimental support for models generation. This work has been done with Frédéric Lang, an intern at OCamlPro from February to July 2016. The idea behind models generation The...<p>We have recently released a new (public up-to-date) version of Alt-Ergo. We focus in this article on its main new feature: experimental support for models generation. This work has been done with <em>Frédéric Lang</em>, an intern at OCamlPro from February to July 2016.</p>
<h3>The idea behind models generation</h3>
<p>The idea behind this feature is the following: when Alt-Ergo fails to prove the validity of a given formula <code>F</code>, it tries to compute and exhibit values for the terms of the problem that make the negation of <code>F</code> satisfiable. For instance, for the following example, written in Alt-Ergo's syntax,</p>
<pre><code class="language-ocaml">logic f : int -> int
logic a, b : int
goal g:
(a <> b and f(a) <= f(b) + 2*a) ->
false
</code></pre>
<p>a possible (counter) model is <code>a = 1</code>, <code>b = 3</code>, <code>f(a) = 0</code>, and <code>f(b) = 0</code>. The solution is called a <code>candidate</code> model because universally quantified formulas are, in general, not taken into account. We talk about <code>counter example</code> or <code>counter model</code> because the solution falsifies (i.e. satisfies the negation of) <code>F</code>.</p>
<h3>Basic usage</h3>
<p>Models generation in Alt-Ergo is non-intrusive. It is controlled via a new option called <code>-interpretation</code>. This option requires an integer argument. The default value <code>0</code> disables the feature, and:</p>
<ul>
<li><code>-interpretation 1</code> triggers a model computation and display at the end of Alt-Ergo's execution (i.e. just before returning <code>I don't know</code>),
</li>
<li><code>-interpretation 2</code> triggers a model computation before each axiom instantiation round,
</li>
<li><code>-interpretation 3</code> is the most aggressive. It triggers a model computation before each Boolean decision in the SAT.
</li>
</ul>
<p>For the two latter strategies, the model will be displayed at the end of the execution if the given formula is not proved. Note that a negative argument (-1, -2, or -3) will enable model computation as explained above, but the result will not be displayed (useful for automatic testing). In addition, if Alt-Ergo times out, the latest computed model, if any, will be shown.</p>
<h3>Advanced usage</h3>
<p>If you are not on Windows, you will also be able to use option <code>-interpretation-timelimit</code> to try to get a candidate model even when Alt-Ergo hits a given time limit (set with option <code>-timelimit</code>). The idea is simple: if Alt-Ergo fails to prove validity during the time allocated for "proof search", it will activate models generation and try to get a counter example during the time allocated for that.</p>
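<p>Concretely, assuming the goal above is saved in a file <code>example.why</code> (the file name and the time limits are only illustrative), the invocations could look like:</p>
<pre><code class="language-shell-session"># display a candidate model before answering "I don't know"
alt-ergo -interpretation 1 example.why
# spend up to 30s on proof search, then up to 10s trying to build a model
alt-ergo -timelimit 30 -interpretation 1 -interpretation-timelimit 10 example.why
</code></pre>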
<h3>Form of produced models</h3>
<p>Currently, models are printed in a syntax similar to SMT2's. We made this choice because Why3 already parses models in this format. For instance, Alt-Ergo outputs the following model for the example above:</p>
<pre><code class="language-ocaml">(
(a 1)
(b 3)
((f 3) 0)
((f 1) 0)
)
</code></pre>
<h3>Some known issues and limitations</h3>
<ul>
<li>
<p>For the moment, arrays are interpreted in terms of the accesses that appear in the input formula, or that have been added internally by the decision procedure. In particular, a non-constrained array <code>arr</code> will probably be uninterpreted in the model (which would mean that it can have any well-typed value at any well-typed index).</p>
</li>
<li>
<p>Model generation may not terminate in the presence of non-linear arithmetic. This is actually the case for the goal
<code>goal g: forall x : real. x * x = 2. -> false</code>
(Alt-Ergo handles rationals, and there is no rational <code>x</code> such that <code>x * x = 2</code>). We plan to implement a <code>delta-completeness</code>-like approach to stop splitting when intervals become really too small.</p>
</li>
<li>
<p>Currently, we generate a model for the content of the decision procedures part. Since the SAT's model is (in general) partial in Alt-Ergo, some ground terms may be missing. Moreover, no filtering with labels mechanism is done for the moment.</p>
</li>
</ul>
<h3>Alt-Ergo 1.30 vs 1.20 vs 1.01 releases</h3>
<p>A quick comparison between this new version, the latest private release (1.20), and the latest public release (1.01) on our internal benchmarks is shown below. Notice that this version is faster and discharges more formulas.</p>
<table>
<thead>
<tr>
<th></th>
<th>Alt-Ergo 1.01</th>
<th>Alt-Ergo 1.20</th>
<th>Alt-Ergo 1.30</th>
</tr>
</thead>
<tbody>
<tr>
<td>Why3 benchmarks (9752 VCs)</td>
<td>88.36% (7310 seconds)</td>
<td>89.23% (7155 seconds)</td>
<td>89.57% (4553 seconds)</td>
</tr>
<tr>
<td>SPARK benchmarks (14442 VCs)</td>
<td>78.05% (3872 seconds)</td>
<td>78.42% (3042 seconds)</td>
<td>78.56% (2909 seconds)</td>
</tr>
<tr>
<td>BWare benchmarks (12828 VCs)</td>
<td>97.38% (6373 seconds)</td>
<td>98.02% (6907 seconds)</td>
<td>98.31% (4231 seconds)</td>
</tr>
</tbody>
</table>
<h3>Download, install and bugs report</h3>
<p>You can learn more about Alt-Ergo and download the latest version on <a href="https://alt-ergo.ocamlpro.com/">the solver's website</a>. You can also install it <a href="https://opam.ocaml.org/packages/alt-ergo/alt-ergo.1.30">via the OPAM package manager</a>. For bug reports, we recommend <a href="https://github.com/OCamlPro/alt-ergo/issues">Alt-Ergo's issue tracker on GitHub</a>.</p>
<p>Don't hesitate to give your feedback to help us improve Alt-Ergo. You can also contribute benchmarks to diversify and enrich our internal test-suite.</p>
opam-lib 1.3 availablehttps://ocamlpro.com/blog/2016_11_20_opam_lib_1.3_available2016-11-20T08:12:13Z2016-11-20T08:12:13Z
Louis Gesbert
opam-lib 1.3 The package for opam-lib version 1.3 has just been released in the official opam repository. There is no release of opam with version 1.3, but this is an intermediate version of the library that retains compatibility of the file formats with 1.2.2. The purpose of this release is twofold...<h2>opam-lib 1.3</h2>
<p>The package for opam-lib version 1.3 has just been released in the official
<code>opam</code> repository. There is no release of
<code>opam</code> with version 1.3, but this is an intermediate
version of the library that retains compatibility of the file formats with
1.2.2.</p>
<p>The purpose of this release is twofold:</p>
<ul>
<li><strong>provide some fixes and enhancements over opam-lib 1.2.2.</strong> For example, 1.3
has an enhanced <code>lint</code> function
</li>
<li><strong>be a step towards migration to opam-lib 2.0.</strong>
</li>
</ul>
<p><strong>This version is compatible with the current stable release of opam (1.2.2)</strong>,
but dependencies have been updated so that you are not (e.g.) stuck on an old
version of ocamlgraph.</p>
<p>Therefore, I encourage all maintainers of tools based on opam-lib to migrate to
1.3.</p>
<p>The respective APIs are available in html for
<a href="https://opam.ocaml.org/doc/1.2/api">1.2</a> and <a href="https://opam.ocaml.org/doc/1.3/api">1.3</a>.</p>
<blockquote>
<p><strong>A note on plugins</strong>: when you write opam-related tools, remember that by
setting <code>flags: plugin</code> in their definition and installing a binary named
<code>opam-toolname</code>, you will enable the users to install package <code>toolname</code> and
run your tool with a single <code>opam toolname</code> command.</p>
</blockquote>
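<p>As a sketch, the definition of a plugin named <code>mytool</code> (a made-up name) would contain something along these lines, with its install step putting an <code>opam-mytool</code> binary in the PATH so that <code>opam mytool</code> works:</p>
<pre><code class="language-shell-session">name: "mytool"
flags: plugin
# the build and install steps must provide a binary called "opam-mytool"
</code></pre>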
<h3>Architectural changes</h3>
<p>If you need to migrate from 1.2 to 1.3, these tips may help:</p>
<ul>
<li>
<p>there are now 6 different ocamlfind sub-libraries instead of just 4: <code>format</code>,
which contains the handlers for opam types and file formats, has been split out from
the core library, while <code>state</code>, which handles the state of a given opam root and
switch, has been split from the <code>client</code> library.</p>
</li>
<li>
<p><code>OpamMisc</code> is gone and moved into the better organised <code>OpamStd</code>, with
submodules for <code>String</code>, <code>List</code>, etc.</p>
</li>
<li>
<p><code>OpamGlobals</code> is gone too, and its contents have been moved to:</p>
<ul>
<li><code>OpamConsole</code> for the printing, logging, and shell interface handling part
</li>
<li><code>OpamXxxConfig</code> modules for each of the libraries for handling the global
configuration variables. You should call the respective <code>init</code> functions,
with the options you want to set, for proper initialisation of the lib
options (and handling the <code>OPAMXXX</code> environment variables)
</li>
</ul>
</li>
<li>
<p><code>OpamPath.Repository</code> is now <code>OpamRepositoryPath</code>, and part of the
<code>repository</code> sub-library.</p>
</li>
</ul>
<h2>opam-lib 2.0 ?</h2>
<p>The development version of the opam-lib (<code>2.0~alpha5</code> as of writing) is already
available on opam. The name has been changed to provide a finer granularity, so
it can actually be installed concurrently -- but be careful not to confuse the
ocamlfind package names (<code>opam-lib.format</code> for 1.3 vs <code>opam-format</code> for 2.0).</p>
<p>The provided packages are:</p>
<ul>
<li><a href="https://opam.ocaml.org/packages/opam-file-format"><code>opam-file-format</code></a>: now
separated from the opam source tree, this has no dependencies and can be used
to parse and print the raw opam syntax.
</li>
<li><a href="https://opam.ocaml.org/packages/opam-core"><code>opam-core</code></a>: the basic toolbox
used by opam, which actually doesn't include the opam specific part. Includes
a tiny extra stdlib, the engine for running a graph of processes in parallel,
some system handling functions, etc. Depends on ocamlgraph and re only.
</li>
<li><a href="https://opam.ocaml.org/packages/opam-format"><code>opam-format</code></a>: defines opam
data types and their file i/o functions. Depends just on the two above.
</li>
<li><a href="https://opam.ocaml.org/packages/opam-core"><code>opam-solver</code></a>: opam's interface
with the <a href="https://opam.ocaml.org/packages/dose3">dose3</a> library and external
solvers.
</li>
<li><a href="https://opam.ocaml.org/packages/opam-repository"><code>opam-repository</code></a>: fetching
repositories and package sources from all handled remote types.
</li>
<li><a href="https://opam.ocaml.org/packages/opam-state"><code>opam-state</code></a>: handling of the
opam states, at the global, repository and switch levels.
</li>
<li><a href="https://opam.ocaml.org/packages/opam-client"><code>opam-client</code></a>: the client
library, providing the top-level operations (installing packages...), and CLI.
</li>
<li><a href="https://opam.ocaml.org/packages/opam-devel"><code>opam-devel</code></a>: this packages the
development version of the opam tool itself, for bootstrapping. You can
install it safely as it doesn't install the new <code>opam</code> in the PATH.
</li>
</ul>
<p>The new API can also be <a href="https://opam.ocaml.org/doc/2.0/api">browsed</a>;
please get in touch if you have trouble migrating.</p>
opam 2.0 preview release!https://ocamlpro.com/blog/2016_09_20_opam_2.0_preview_release2016-09-20T08:12:13Z2016-09-20T08:12:13Z
Louis Gesbert
We are pleased to announce a preview release for opam 2.0, with over 700 patches since 1.2.2. Version 2.0~alpha4 has just been released, and is ready to be more widely tested. This version brings many new features and changes, the most notable one being that OCaml compiler packages are no longer spe...<p>We are pleased to announce a preview release for <code>opam</code> 2.0, with over 700
patches since <a href="https://opam.ocaml.org/blog/opam-1-2-2-release/">1.2.2</a>. Version
<a href="https://github.com/ocaml/opam/releases/2.0-alpha4">2.0~alpha4</a> has just been
released, and is ready to be more widely tested.</p>
<p>This version brings many new features and changes, the most notable one being
that OCaml compiler packages are no longer special entities, and are replaced
by standard package definition files. This in turn means that <code>opam</code> users have
more flexibility in how switches are managed, including for managing non-OCaml
environments such as <a href="http://coq.io/opam/">Coq</a> using the same familiar tools.</p>
<h2>A few highlights</h2>
<p>This is just a sample, see the full
<a href="https://github.com/ocaml/opam/blob/2.0-alpha4/CHANGES">changelog</a> for more:</p>
<ul>
<li>
<p><strong>Sandboxed builds:</strong> Command wrappers can be configured to, for example,
restrict permissions of the build and install processes using Linux
namespaces, or run the builds within Docker containers.</p>
</li>
<li>
<p><strong>Compilers as packages:</strong> This brings many advantages for <code>opam</code> workflows,
such as being able to upgrade the compiler in a given switch, better tooling for
local compilers, and the possibility to define <code>coq</code> as a compiler or even
use <code>opam</code> as a generic shell scripting engine with dependency tracking.</p>
</li>
<li>
<p><strong>Local switches:</strong> Create switches within your projects for easier
management. Simply run <code>opam switch create <directory> <compiler></code> to get
started.</p>
</li>
<li>
<p><strong>Inplace build:</strong> Use <code>opam</code> to build directly from
your source directory. Ensure the package is pinned locally then run <code>opam install --inplace-build</code>.</p>
</li>
<li>
<p><strong>Automatic file tracking:</strong> <code>opam</code> now tracks the files installed by packages
and is able to cleanly remove them when no existing files were modified.
The <code>remove:</code> field is now optional as a result.</p>
</li>
<li>
<p><strong>Configuration file:</strong> This can be used to direct choices at <code>opam init</code>
automatically (e.g. specific repositories, wrappers, variables, fetch
commands, or the external solver). It can also override all of <code>opam</code>'s
OCaml-related settings.</p>
</li>
<li>
<p><strong>Simpler library:</strong> the OCaml API is completely rewritten and should make it
much easier to write external tools and plugins. Existing tools will need to be
ported.</p>
</li>
<li>
<p><strong>Better error mitigation:</strong> Through clever ordering of the shell actions and
separation of <code>build</code> and <code>install</code>, most build failures now leave your current
installation intact, instead of leaving packages removed.</p>
</li>
</ul>
<h2>Roll out</h2>
<p>You are very welcome to try out the alpha, and report any issues. The repository
at <code>opam.ocaml.org</code> will remain in 1.2 format (with a 2.0 mirror at
<code>opam.ocaml.org/2.0~dev</code> in sync) until after the release is out, which means
the extensions can not be used there yet, but you are welcome to test on local
or custom repositories, or package pinnings. The reverse translation (2.0 to
1.2) is planned, to keep supporting 1.2 installations after that date.</p>
<p>The documentation for the new version is available at
http://opam.ocaml.org/doc/2.0/. This is still work in progress, so please do ask
if anything is unclear.</p>
<h2>Interface changes</h2>
<p>Commands <code>opam switch</code> and <code>opam list</code> have been overhauled for more consistency
and flexibility: the former won't implicitly create new switches unless called
with the <code>create</code> subcommand, and <code>opam list</code> now allows combining filters and
finely specifying the output format. They may not be fully backwards compatible, so
please check your scripts.</p>
<p>Most other commands have also seen fixes or improvements. For example, <code>opam</code>
no longer forgets about your set of installed packages on the first error, and the
new <code>opam install --restore</code> can be used to reinstall your selection after a
failed upgrade.</p>
<h2>Repository changes</h2>
<p>While users of <code>opam</code> 1.2 should feel at home with the changes, the 2.0 repository
and package formats are not compatible. Indeed, the move of the compilers to
standard packages implies some conversions, and updates to the relationships
between packages and their compiler. For example, package constraints like</p>
<pre><code class="language-shell-session">available: [ ocaml-version >= "4.02" ]
</code></pre>
<p>are now written as normal package dependencies:</p>
<pre><code class="language-shell-session">depends: [ "ocaml" {>= "4.02"} ]
</code></pre>
<p>To make the transition easier,</p>
<ul>
<li>upgrade of a custom repository is simply a matter of running <code>opam-admin upgrade-format</code> at its root;
</li>
<li>the official repository at <code>opam.ocaml.org</code> already has a 2.0 mirror, to which
you will be automatically redirected;
</li>
<li>package definitions are automatically converted when you pin a package.
</li>
</ul>
<p>Note that the <code>ocaml</code> package on the official repository is actually a wrapper
that depends on one of <code>ocaml-base-compiler</code>, <code>ocaml-system</code> or
<code>ocaml-variants</code>, which contain the different flavours of the actual compiler.
It is expected that it may only get picked up when requested by package
dependencies.</p>
<h2>Package format changes</h2>
<p>The <code>opam</code> package definition format is very similar to before, but there are
quite a few extensions and some changes:</p>
<ul>
<li>it is now mandatory to separate the <code>build:</code> and <code>install:</code> steps (this allows
tracking of installed files, better error recovery, and some optional security
features);
</li>
<li>the url and description can now optionally be included in the <code>opam</code> file
using the section <code>url {}</code> and fields <code>synopsis:</code> and <code>description:</code>;
</li>
<li>it is now possible to have dependencies toggled by globally-defined <code>opam</code>
variables (<em>e.g.</em> for a dependency needed on some OS only), or even rely on
the package information (<em>e.g.</em> have a dependency at the same version);
</li>
<li>the new <code>setenv:</code> field allows packages to export updates to environment
variables;
</li>
<li>custom fields <code>x-foo:</code> can be used for extensions and external tools;
</li>
<li>allow <code>"""</code> delimiters around unescaped strings
</li>
<li><code>&</code> is now parsed with higher priority than <code>|</code>
</li>
<li>field <code>ocaml-version:</code> can no longer be used
</li>
<li>the <code>remove:</code> field should not be used anymore for simple cases (just removing
files)
</li>
</ul>
<h2>Let's go then -- how to try it ?</h2>
<p>First, be aware that you'll be prompted to update your <code>~/.opam</code> to 2.0 format
before anything else, so if you value it, make a backup. Or just export
<code>OPAMROOT</code> to test the alpha on a temporary opam root.</p>
<p>Packages for opam 2.0 are already in the opam repository, so if you have a
working installation of opam (at least 1.2.1), you can bootstrap as easily
as:</p>
<pre><code class="language-shell-session">opam install opam-devel
</code></pre>
<p>For obvious reasons, installing this within the current opam root doesn't put the new
opam in your PATH, so you can manually install it as e.g. "opam2" using:</p>
<pre><code class="language-shell-session">sudo cp $(opam config var "opam-devel:lib")/opam /usr/local/bin/opam2
</code></pre>
<p>You can otherwise install as usual:</p>
<ul>
<li>
<p>Using pre-built binaries (available for OSX and Linux x86, x86_64, armhf) and
our install script:</p>
<pre><code class="language-shell-session">wget https://raw.github.com/ocaml/opam/2.0-alpha4-devel/shell/opam_installer.sh -O - | sh -s /usr/local/bin
</code></pre>
<p>Equivalently,
<a href="https://github.com/ocaml/opam/releases/2.0-alpha4">pick your version</a> and
download it to your PATH;</p>
</li>
<li>
<p>Building from our inclusive source tarball:
<a href="https://github.com/ocaml/opam/releases/download/2.0-alpha4/opam-full-2.0-alpha4.tar.gz">download here</a>
and build using <code>./configure && make lib-ext && make && make install</code> if you
have OCaml >= 4.01 already available, <code>make cold && make install</code> otherwise;</p>
</li>
<li>
<p>Or from <a href="https://github.com/ocaml/opam/tree/2.0-alpha4">source</a>, following the
included instructions from the README. Some files have been moved around, so
if your build fails after you updated an existing git clone, try to clean it
up (<code>git clean -fdx</code>).</p>
</li>
</ul>
ASM.OCamlhttps://ocamlpro.com/blog/2016_04_01_asm_ocaml2016-04-01T08:12:13Z2016-04-01T08:12:13Z
chambart
As you may know, there is a subset of Javascript that compiles efficiently to assembly used as backend of various compilers including a C compiler like emscripten. We'd like to present you in the same spirit how never to allocate in OCaml. Before starting to write anything, we must know how to find ...<p>As you may know, there is a subset of JavaScript that compiles efficiently to assembly and is used as a backend by various compilers, including C compilers such as emscripten. In the same spirit, we'd like to show you how never to allocate in OCaml.</p>
<p>Before starting to write anything, we must know how to find out whether some code is allocating. The best way currently is to look at the Cmm intermediate representation. We can see it by calling <code>ocamlopt</code> with the <code>-dcmm</code> option:</p>
<p><code>ocamlopt -c -dcmm test.ml</code></p>
<pre><code class="language-ocaml">let f x = (x,x)
</code></pre>
<p>An excerpt from the output:</p>
<pre><code class="language-lisp">(function camlTest__f_4 (x_6/1204: val) (alloc 2048 x_6/1204 x_6/1204))
</code></pre>
<p>To improve readability, in this post we will clean up the variable names a bit:</p>
<pre><code class="language-lisp">(function f (x: val) (alloc 2048 x x))
</code></pre>
<p>We see that the function f (named <code>camlTest__f_4</code>) is calling the <code>alloc</code> primitive, which obviously is an allocation. Here, this creates a size-2 block with tag 0 (2048 = 2 << 10 + 0) containing twice the value <code>x_6/1204</code>, which was <code>x</code> in the source. So we can detect whether some code is allocating with <code>ocamlopt -c -dcmm test.ml 2>&1 | grep alloc</code> (obviously any function or variable named alloc will also appear).</p>
<p>It is possible to write some code that doesn't allocate (in the heap) at all, but what are the limitations? For instance, the omnipresent Fibonacci function does not allocate:</p>
<pre><code class="language-ocaml">let rec fib = function
| 0 -> 0
| 1 -> 1
| n -> fib (n-1) + fib (n-2)
</code></pre>
<pre><code class="language-lisp">(function fib (n: val)
(if (!= n 1)
(if (!= n 3)
(let Paddint_arg (app "fib" (+ n -4) val)
(+ (+ (app "fib" (+ n -2) val) Paddint_arg) -1))
3)
1))
</code></pre>
<p>But quite a lot of common patterns do:</p>
<ul>
<li>Building structured values will allocate (tuple, records, sum types containing an element, ...)
</li>
<li>Using floats, int64, ... will allocate
</li>
<li>Declaring non-toplevel functions will allocate
</li>
</ul>
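<p>For instance, each of the following typically shows up as an <code>alloc</code> (or a call to an allocation function) in the <code>-dcmm</code> output; these snippets are ours, and the exact behaviour depends on the compiler version, so do check on your own setup:</p>
<pre><code class="language-ocaml">let make_pair x = (x, x)       (* structured value: allocates a block *)
let double x = x *. 2.         (* the returned float is boxed *)
let adder x = fun y -> x + y   (* non-toplevel function: allocates a closure *)
</code></pre>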
<p>Considering that, it can appear that it is almost impossible to write any non-trivial code without using those. But that's not completely true.</p>
<p>There are some exceptions to those rules, where some normally allocating constructions can be optimised away. We will explain how to exploit them to be able to write some more code.</p>
<h2>Local references</h2>
<p>Maybe the most important one is the case of local references.</p>
<pre><code class="language-ocaml">let fact n =
let result = ref 1 in
for i = 1 to n do
result := n * !result
done;
!result
</code></pre>
<p>To improve readability, this has been cleaned and demangled:</p>
<pre><code class="language-lisp">(function fact (n: val)
(let (result 3)
(seq
(let (i 3 bound n)
(catch
(if (> i bound) (exit 3)
(loop
(assign result (+ (* (+ n -1) (>>s result 1)) 1))
(assign i (+ i 2))
(if (== i bound) (exit 3) [])
with(3) []))))
result)))
</code></pre>
<p>You can notice that the allocation of the reference disappeared. The modifications were replaced by assignments (the <code>assign</code> operator) to the result variable. This transformation can happen when a reference is never used anywhere else than as an argument of the <code>!</code> and <code>:=</code> operators, and does not appear in the closure of any local function like:</p>
<pre><code class="language-ocaml">let counter () =
let count = ref 0 in
let next () = count := !count + 1; !count in
next
</code></pre>
<p>This won't happen in this case since count is in the closure of next.</p>
<h2>Unboxing</h2>
<p>The float, int32, int64 and nativeint types do not fit in the generic representation of values that can be stored in the OCaml heap, so they are boxed. This means that they are allocated and there is an annotation to tell the garbage collector to skip their content. So using them in general will allocate. But an important optimization is that local uses (some cases that obviously won't go in the heap) are 'unboxed', i.e. not allocated.</p>
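<p>As an illustration (ours, and again dependent on the compiler version, so verify with <code>-dcmm</code>): in the function below, the local reference elimination described above combines with float unboxing, so the accumulator can stay unboxed inside the loop and only the final result needs a float box:</p>
<pre><code class="language-ocaml">let sum (a : float array) =
  let r = ref 0.0 in
  for i = 0 to Array.length a - 1 do
    r := !r +. a.(i)          (* no allocation per iteration *)
  done;
  !r                          (* the returned float is boxed once *)
</code></pre>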
<h2>If/match couple</h2>
<p>Some changes in 4.03 also improve some cases of branches returning tuples:</p>
<pre><code class="language-ocaml">let positive_difference x y =
let (min, max) =
if x < y then
(x, y)
else
(y, x)
in
max - min
</code></pre>
<pre><code class="language-lisp">(function positive_difference (x: val y: val)
(catch
(if (< x y) (exit 7 y x)
(exit 7 x y))
with(7 max min) (+ (- max min) 1)))
</code></pre>
<h2>Control flow</h2>
<p>You can express almost any control flow like that, but it is quite
impractical and still limited in many ways.</p>
<p>If you don't want to write everything as for and while loops, you can
write functions for your control flow, but to prevent allocation you
will have to refrain from doing a few things. For instance, you should
of course not pass records or tuples as arguments to functions; instead,
pass each field separately as a different argument.</p>
<p>But what happens when you want to return multiple values? There is
some ongoing work to try to optimise the allocations away in some of
those cases, but currently you can't. Really? NO!</p>
<h2>Returning multiple values</h2>
<p>If you bend your mind a bit, you may see that returning from a
function is almost the same thing as calling one... Or you can make it
that way. So let's transform our code into 'Continuation Passing Style'.</p>
<p>For instance, let's write a function that finds the minimum and the maximum of a list. That could be written like that:</p>
<pre><code class="language-ocaml">let rec fold_left f init l =
match l with
| [] -> init
| h :: t ->
let acc = f init h in
fold_left f acc t
let keep_min_max (min, max) v =
let min = if v < min then v else min in
let max = if v > max then v else max in
min, max
let find_min_max l =
match l with
| [] -> invalid_arg "find_min_max"
| h :: t ->
fold_left keep_min_max (h, h) t
</code></pre>
<h3>Continuation Passing Style</h3>
<p>Transforming it to continuation passing style (CPS) replaces every function return with a tail call to a function representing 'what happens after'. This function is usually called a continuation, and a convention is to use the variable name 'k' for it.</p>
<p>Let's start simply by turning only the keep_min_max function into continuation passing style.</p>
<pre><code class="language-ocaml">let keep_min_max (min, max) v k =
let min = if v < min then v else min in
let max = if v > max then v else max in
k (min, max)
val keep_min_max : 'a * 'a -> 'a -> ('a * 'a -> 'b) -> 'b
</code></pre>
<p>That's all for this function. But of course we need to modify the function calling it a bit.</p>
<pre><code class="language-ocaml">let rec fold_left f init l =
match l with
| [] -> init
| h :: t ->
let k acc =
fold_left f acc t
in
f init h k
val fold_left : ('a -> 'b -> ('a -> 'a) -> 'a) -> 'a -> 'b list -> 'a
val find_min_max : 'a list -> 'a * 'a
</code></pre>
<p>Here, instead of calling f and then recursively calling fold_left, we prepare what we will do after calling f (that is, calling fold_left), and then call f with that continuation. find_min_max is unchanged and still has the same type.</p>
<p>But we can continue turning things into CPS, and a full conversion would result in:</p>
<pre><code class="language-ocaml">let rec fold_left_k f init l k =
match l with
| [] -> k init
| h :: t ->
let k acc =
fold_left_k f acc t k
in
f init h k
val fold_left_k : ('a -> 'b -> ('a -> 'c) -> 'c) -> 'a -> 'b list -> ('a -> 'c) -> 'c
let keep_min_max_k (min, max) v k =
let min = if v < min then v else min in
let max = if v > max then v else max in
k (min, max)
val keep_min_max_k : 'a * 'a -> 'a -> ('a * 'a -> 'b) -> 'b
let find_min_max_k l k =
match l with
| [] -> invalid_arg "find_min_max"
| h :: t ->
fold_left_k keep_min_max (h, h) t k
val find_min_max_k : 'a list -> ('a * 'a -> 'b) -> 'b
let find_min_max l =
find_min_max_k l (fun x -> x)
val find_min_max : 'a list -> 'a * 'a
</code></pre>
<h3>Where rectypes matter for performance reasons</h3>
<p>That's nice, we only have tail calls now, but of course we are not done removing allocations yet. We now need to get rid of the allocation of the closure in fold_left_k and of the pairs in keep_min_max_k. For that, we need to pass everything that would otherwise be allocated as separate arguments:</p>
<pre><code class="language-ocaml">let rec fold_left_k2 f init1 init2 l k =
match l with
| [] -> k init1 init2
| h :: t ->
f init1 init2 h t fold_left_k2 k
val fold_left_k2 :
('b -> 'c -> 'd -> 'd list -> 'a -> ('b -> 'c -> 'e) -> 'e) ->
'b -> 'c -> 'd list -> ('b -> 'c -> 'e) -> 'e as 'a
let rec keep_min_max_k2 = fun min max v k_arg k k2 ->
let min = if v < min then v else min in
let max = if v > max then v else max in
k keep_min_max_k2 min max k_arg k2
val keep_min_max_k2 :
'b -> 'b -> 'b -> 'c -> ('a -> 'b -> 'b -> 'c -> 'd -> 'e) -> 'd -> 'e as 'a
let find_min_max_k2 l k =
match l with
| [] -> invalid_arg "find_min_max"
| h :: t ->
fold_left_k2 keep_min_max_k2 h h t k
val find_min_max_k2 : 'a list -> ('a -> 'a -> 'b) -> 'b
</code></pre>
<p>For some reason, we now need to activate 'rectypes' to allow functions to have a recursive type (the 'as 'a') but we managed to completely get rid of allocations.</p>
<pre><code class="language-lisp">(function fold_left_k2 (f: val init1: val init2: val l: val k: val)
(if (!= l 1)
(app "caml_apply6" init1 init2 (load val l) l "fold_left_k2" k f val))
(app "caml_apply2" init1 init2 k val)))
(function keep_min_max_k2 (min: val max: val v: val k: val k: val k2: val)
(let
(min
(if (!= (extcall "caml_lessthan" v min val) 1)
v min)
max
(if (!= (extcall "caml_greaterthan" v max val) 1)
v max))
(app "caml_apply5" "keep_min_max_k2" min max k k2 k val)))
(function find_min_max_k2 (l: val k: val)
(if (!= l 1)
(let h (load val l)
(app "fold_left_k2" "keep_min_max_k2" h h t k val))
(raise "exception")))
</code></pre>
<p>So we can turn return points into call points and get rid of a lot of potential allocations that way. But of course there is no way to handle functions passing or returning sum types like that! Well, I'm not so sure.</p>
<h2>Sum types</h2>
<p>Let's try with the option type for instance:</p>
<pre><code class="language-ocaml">type 'a option =
| None
| Some of 'a
let add_an_option_value opt v =
match opt with
| None -> v
| Some n -> n + v
let n1 = add_an_option_value (Some 3) 4
let n2 = add_an_option_value None 4
</code></pre>
<p>The constructor of the sum type tells us whether there are more values to get, and what their type is. But there is another way to associate some type information with an actual value: GADTs.</p>
<pre><code class="language-ocaml">type ('a, 'b) option_case =
| None' : ('a, unit) option_case
| Some' : ('a, 'a) option_case
let add_an_option_value (type t) (opt: (int, t) option_case) (n:t) v =
match opt with
| None' -> v
| Some' -> n + v
let n1 = add_an_option_value Some' 3 4
let n2 = add_an_option_value None' () 4
</code></pre>
<p>And voilà, no allocation anymore!</p>
<p>Combining that with the CPS transformation can get you quite far without allocating!</p>
<h2>Manipulating Memory</h2>
<p>Now that we can manage almost any control flow without allocating, we also need to manipulate some values. That's the point where we simply suggest using the same approach as ASM.js: allocate a single large bigarray (as a kind of malloc), treat integers as pointers, and you can do anything. We won't go into too much detail here, as this would require another post on its own.</p>
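<p>To make this slightly more concrete, here is a minimal sketch of ours (names and sizes are arbitrary): a preallocated bigarray plays the role of the heap, plain integers play the role of pointers, and a bump allocator hands out blocks.</p>
<pre><code class="language-ocaml">open Bigarray

(* our "heap": a single preallocated array of native ints *)
let heap = Array1.create int c_layout (1 lsl 20)
let next = ref 0

(* bump-allocate a block of [size] cells and return its "address" *)
let alloc size =
  let addr = !next in
  next := !next + size;
  addr

(* a pair stored in the bigarray: reads and writes of ints don't allocate *)
let make_pair a b =
  let p = alloc 2 in
  heap.{p} <- a;
  heap.{p + 1} <- b;
  p

let first p = heap.{p}
let second p = heap.{p + 1}
</code></pre>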
<p>For some low-level packed bitfield manipulation, you can have a look at <a href="https://gist.github.com/chambart/a0382fb4d908a3e45744">some more tricks</a>.</p>
<h2>Conclusion</h2>
<p>So if you want to write non-allocating code in OCaml: turn everything into CPS, add additional arguments everywhere, turn your sum types into unboxed GADTs, and manipulate a single large bigarray. And enjoy!</p>
<h1>Comments</h1>
<p>Gaetan Dubreil (3 April 2016 at 11 h 16 min):</p>
<blockquote>
<p>Thank you for this attractive and informative post.
Just to be sure, is it not ‘t’ rather than ‘l’ that must be past to the fold_left function?
You said “we only have tail calls now” but I don’t see any none tail calls in the first place, am I wrong?</p>
</blockquote>
<p>Pierre Chambart (4 April 2016 at 14 h 48 min):</p>
<blockquote>
<p>There where effectively some typos. Thanks for noticing.</p>
<p>There is one non-tail call in fold_left: the call to f. But effectively the recursion is tail.</p>
</blockquote>
<p>kantien (25 May 2016 at 13 h 57 min):</p>
<blockquote>
<p>Interesting article, but i have one question. Can we say, from the proof theory point of view, that turning the code in CPS style not to allocate is just an application of the Gentzen’s cut-elimination theorem ?
I explain in more details this interpretation : if we have a proof P1 of the proposition A and a proof P2 of the proposition A ⇒ B, we can produce a proof P3 of proposition B by applying the cut rule or modus ponens, but the theorem says that we can eliminate the use of cut rule and produce a direct proof P4 of the proposition B. But modus ponens (or cut rule) is just the rule for typing function application : if f has type ‘a -> ‘b and x has type ‘a then f x has type ‘b. And so the cut-elimination theorem says that we can produce an object of type ‘b without allocate an object of type ‘a (this is not necessary to produce the P1 proof, or more exactly this is not necessary to put the P1’s conclusion in the environment in order to use it as a premise of the P2 proof ). Am I right ?</p>
</blockquote>
<p>jdxu (4 January 2021 at 11 h 36 min):</p>
<blockquote>
<p>Very useful article. BTW, is there any document/tutorial/article about cmm syntax?</p>
</blockquote>
Signing the OPAM repositoryhttps://ocamlpro.com/blog/2015_06_05_signing_the_opam_repository2015-06-05T08:12:13Z2015-06-05T08:12:13Z
Louis Gesbert
Hannes Mehnert
NOTE (September 2016): updated proposal from OCaml 2016 workshop is available, including links to prototype implementation. This is an initial proposal on signing the OPAM repository. Comments and discussion are expected on the platform mailing-list. The purpose of this proposal is to enable a secur...<blockquote>
<p>NOTE (September 2016): updated proposal from OCaml 2016 workshop is
<a href="https://github.com/hannesm/conex-paper/blob/master/paper.pdf">available</a>,
including links to prototype implementation.</p>
</blockquote>
<blockquote>
<p>This is an initial proposal on signing the OPAM repository. Comments and
discussion are expected on the
<a href="http://lists.ocaml.org/listinfo/platform">platform mailing-list</a>.</p>
</blockquote>
<p>The purpose of this proposal is to enable a secure distribution of
OCaml packages. The package repository does not have to be trusted if
package developers sign their releases.</p>
<p>Like <a href="http://www.python.org/dev/peps/pep-0458/">Python's pip</a>, <a href="https://corner.squareup.com/2013/12/securing-rubygems-with-tuf-part-1.html">Ruby's gems</a> or more recently
<a href="http://www.well-typed.com/blog/2015/04/improving-hackage-security/">Haskell's hackage</a>, we are going to implement a flavour of The
Update Framework (<a href="http://theupdateframework.com/">TUF</a>). This is good because:</p>
<ul>
<li>it has been designed by people who <a href="http://google-opensource.blogspot.jp/2009/03/thandy-secure-update-for-tor.html">know the stuff</a> much better than
us
</li>
<li>it is built upon a threat model including many kinds of attacks, and there are
some non-obvious ones (see the <a href="https://raw.githubusercontent.com/theupdateframework/tuf/develop/docs/tuf-spec.txt">specification</a>, and below)
</li>
<li>it has been thoroughly reviewed
</li>
<li>following it may help us avoid a lot of mistakes
</li>
</ul>
<p>Importantly, it doesn't enforce any specific cryptography, allowing us to go
with what we have <a href="http://opam.ocaml.org/packages/nocrypto/nocrypto.0.3.1/">at the moment</a> in native OCaml, and evolve later,
<em>e.g.</em> by allowing ed25519.</p>
<p>There are several differences between the goals of TUF and opam: notably,
TUF distributes a directory structure containing the code archive,
whereas opam distributes metadata about OCaml packages. Opam uses git
(and GitHub at the moment) as a first-class citizen: new packages are
submitted as pull requests by developers who already have a GitHub
account.</p>
<p>Note that TUF specifies the signing hierarchy and the format to deliver and
check signatures, but allows a lot of flexibility in how the original files are
signed: we can have packages automatically signed on the official repository, or
individually signed by developers. Or indeed allow both, depending on the
package.</p>
<p>Below, we tried to explain the specifics of our implementation, and mostly the
user and developer-visible changes. It should be understandable without prior
knowledge of TUF.</p>
<p>We are inspired by <a href="https://github.com/commercialhaskell/commercialhaskell/wiki/Git-backed-Hackage-index-signing-and-distribution">Haskell's adjustments</a> (and
<a href="https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal">e2e</a>) to TUF using a git repository for packages. A
signed repository and signed packages are orthogonal. In this
proposal, we aim for both, but will describe them independently.</p>
<h2>Threat model</h2>
<ul>
<li>
<p>An attacker can compromise at least one of the package distribution
system's online trusted keys.</p>
</li>
<li>
<p>An attacker compromising multiple keys may do so at once or over a
period of time.</p>
</li>
<li>
<p>An attacker can respond to client requests (MITM or server
compromise) during downloading of the repository, a package, and
also while uploading a new package release.</p>
</li>
<li>
<p>An attacker knows of vulnerabilities in historical versions of one or
more packages, but not in any current version (protecting against
zero-day exploits is emphatically out-of-scope).</p>
</li>
<li>
<p>Offline keys are safe and securely stored.</p>
</li>
</ul>
<p>An attacker is considered successful if they can cause a client to
build and install (or leave installed) something other than the most
up-to-date version of the software the client is updating. If the
attacker is preventing the installation of updates, they want clients
to not realize there is anything wrong.</p>
<h2>Attacks</h2>
<ul>
<li>
<p>Arbitrary package: an attacker should not be able to provide a package
they created in place of a package a user wants to install (via MITM
during package upload, package download, or server compromise).</p>
</li>
<li>
<p>Rollback attacks: an attacker should not be able to trick clients into
installing software that is older than that which the client
previously knew to be available.</p>
</li>
<li>
<p>Indefinite freeze attacks: an attacker should not be able to respond
to client requests with the same, outdated metadata without the
client being aware of the problem.</p>
</li>
<li>
<p>Endless data attacks: an attacker should not be able to respond to
client requests with huge amounts of data (extremely large files)
that interfere with the client's system.</p>
</li>
<li>
<p>Slow retrieval attacks: an attacker should not be able to prevent
clients from being aware of interference with receiving updates by
responding to client requests so slowly that automated updates never
complete.</p>
</li>
<li>
<p>Extraneous dependencies attacks: an attacker should not be able to
cause clients to download or install software dependencies that are
not the intended dependencies.</p>
</li>
<li>
<p>Mix-and-match attacks: an attacker should not be able to trick clients
into using a combination of metadata that never existed together on
the repository at the same time.</p>
</li>
<li>
<p>Malicious repository mirrors: these should not be able to prevent updates
from good mirrors.</p>
</li>
<li>
<p>Wrong developer attack: an attacker should not be able to upload a new
version of a package for which they are not the real developer.</p>
</li>
</ul>
<h2>Trust</h2>
<p>A difficult problem in a cryptosystem is key distribution. In TUF and
this proposal, a set of root keys are distributed with opam. A
threshold of these root keys needs to sign (transitively) all keys
which are used to verify opam repository and its packages.</p>
<h3>Root keys</h3>
<p>The root of trust is stored in a set of root keys. In the case of the official
opam OCaml repository, the public keys are to be stored in the opam source,
allowing it to validate the whole trust chain. The private keys will be held by
the opam and repository maintainers, and stored password-encrypted, securely
offline, preferably on unplugged storage.</p>
<p>They are used to sign all the top-level keys, using a quorum. The quorum has
several benefits:</p>
<ul>
<li>the compromise of a number of root keys less than the quorum is harmless
</li>
<li>it allows safely revoking and replacing a key, even if it was lost
</li>
</ul>
<p>The added cost is a heavier maintenance burden, but this remains small since these
keys are seldom used (only when keys are about to expire or have been compromised,
or when new top-level keys need to be added).</p>
<p>The initial root keys could be distributed as such:</p>
<ul>
<li>Louis Gesbert, opam maintainer, OCamlPro
</li>
<li>Anil Madhavapeddy, main repository maintainer, OCaml Labs
</li>
<li>Thomas Gazagnaire, main repository maintainer, OCaml Labs
</li>
<li>Grégoire Henry, OCamlPro safekeeper
</li>
<li>Someone in the OCaml team ?
</li>
</ul>
<p>Keys will be set with an expiry date so that one expires each year in turn,
leaving room for smooth rollover.</p>
<p>For other repositories, there will be three options:</p>
<ul>
<li>no signatures (backwards compatible ?), <em>e.g.</em> for local network repositories.
This should be allowed, but with proper warnings.
</li>
<li>trust on first use: get the root keys on first access, let the user confirm
their fingerprints, then fully trust them.
</li>
<li>let the user manually supply the root keys.
</li>
</ul>
<h3>End-to-end signing</h3>
<p>This requires the end-user to be able to validate a signature made by the
original developer. There are two trust paths for the chain of trust (where
"→" stands for "signs for"):</p>
<ul>
<li>(<em>high</em>) root keys →
repository maintainer keys → (signs individually)
package delegation + developer key →
package files
</li>
<li>(<em>low</em>) root keys →
snapshot key → (signs as part of snapshot)
package delegation + developer key →
package files
</li>
</ul>
<p>It is intended that packages may initially follow the <em>low</em> trust path, adding
as little burden and delay as possible when adding new packages, and may then be
promoted to the <em>high</em> path with manual intervention, after verification, from
repository maintainers. This way, most well-known and widely used packages will
be provided with higher trust, and the scope of an attack on the low trust path
would be reduced to new, experimental or little-used packages.</p>
<h3>Repository signing</h3>
<p>This provides consistent, up-to-date snapshots of the repository, and protects
against a whole different class of attacks than end-to-end signing (<em>e.g.</em>
rollbacks, mix-and-match, freeze, etc.)</p>
<p>This is done automatically by a snapshot bot (might run on the repository
server), using the <em>snapshot key</em>, which is signed directly by the root keys,
hence the chain of trust:</p>
<ul>
<li>root keys →
snapshot key →
commit-hash
</li>
</ul>
<p>Where "commit-hash" is the head of the repository's git repository (and thus a
valid cryptographic hash of the full repository state, as well as its history)</p>
<h4>Repository maintainer (RM) keys</h4>
<p>Repository maintainers hold the central role in monitoring the repository and
warranting its security, with or without signing. Their keys (called <em>targets
keys</em> in the TUF framework) are signed directly by the root keys. As they have a
high security potential, a quorum will be required for signing sensitive
operations, in order to reduce the consequences of a compromise.</p>
<p>These keys are stored password-encrypted on the RM computers.</p>
<h4>Snapshot key</h4>
<p>This key is held by the <em>snapshot bot</em> and signed directly by the root keys. It
is used to guarantee consistency and freshness of repository snapshots, and does
so by signing a git commit-hash and a time-stamp.</p>
<p>It is held online and used by the snapshot bot for automatic signing: it has
lower security than the RM keys, but also a lower potential: it can not be used
directly to inject malicious code or metadata in any existing package.</p>
<h4>Delegate developer keys</h4>
<p>These keys are used by the package developers for end-to-end signing. They can
be generated locally as needed by new packagers (<em>e.g.</em> by the <code>opam-publish</code>
tool), and should be stored password-encrypted. They can be added to the
repository through pull-requests, waiting to be signed (i) as part of snapshots
(which also prevents them from being modified later, but we'll get to that) and (ii)
directly by RMs.</p>
<h4>Initial bootstrap</h4>
<p>We'll need to start somewhere, and the current repository isn't signed. An
additional key, <em>initial-bootstrap</em>, will be used to guarantee the integrity of
existing, but as yet unverified, packages.</p>
<p>This is a one-go key, signed by the root keys, and that will then be destroyed.
It is allowed to sign for packages without delegation.</p>
<h3>Trust chain and revocation</h3>
<p>In order to build the trust chain, the opam client downloads a <code>keys/root</code> key
file initially and before every update operation. This file is signed by the
root keys, and can be verified by the client using its built-in keys (or one of
the ways mentioned above for unofficial repositories). It must be signed by a
quorum of known root keys, and contains the comprehensive set of root, RM,
snapshot and initial bootstrap keys: any missing keys are implicitly revoked.
The new set of root keys is stored by the opam client and used instead of the
built-in ones on subsequent runs.</p>
<p>Developer keys are stored in files <code>keys/dev/<id></code>, self-signed, possibly RM
signed (and, obviously, snapshot-signed). The conditions of their verification,
removal or replacement are included in our validation of metadata update (see
below).</p>
<h2>File formats and hierarchy</h2>
<h3>Signed files and tags</h3>
<p>The files follow the opam syntax: a list of <em>fields</em> <code>fieldname:</code> followed by
contents. The format is detailed in <a href="https://opam.ocaml.org/doc/Manual.html#Generalfileformat">opam's documentation</a>.</p>
<p>The signature of files in opam is done on the canonical textual representation,
following these rules:</p>
<ul>
<li>any existing <code>signature:</code> field is removed
</li>
<li>one field per line, ending with a newline
</li>
<li>fields are sorted lexicographically by field name
</li>
<li>newlines, backslashes and double-quotes are escaped in string literals
</li>
<li>spaces are limited to one, and to these cases: after field leaders
<code>fieldname:</code>, between elements in lists, before braced options, between
operators and their operands
</li>
<li>comments are erased
</li>
<li>fields containing an empty list, or a singleton list containing an empty
list, are erased
</li>
</ul>
<p>The <code>signature:</code> field is a list with elements in the format of string triplets
<code>[ "<keyid>" "<algorithm>" "<signature>" ]</code>. For example:</p>
<pre><code>opam-version: "1.2"
name: "opam"
signature: [
[ "louis.gesbert@ocamlpro.com" "RSASSA-PSS" "048b6fb4394148267df..." ]
]
</code></pre>
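<p>As an illustration of these rules, here is a simplified OCaml sketch of how the
canonical form could be computed from already-printed fields; the representation
and helper names are hypothetical, and escaping and comment removal are left out:</p>
<pre><code class="language-ocaml">(* Simplified sketch: fields are given as [name, printed_contents] pairs.
   The canonical form drops the signature field and empty lists, sorts the
   remaining fields, and prints one field per line. *)
let canonical_form fields =
  fields
  |> List.filter (fun (name, contents) ->
         name <> "signature" && contents <> "[]" && contents <> "[[]]")
  |> List.sort (fun (a, _) (b, _) -> compare a b)   (* lexicographic order *)
  |> List.map (fun (name, contents) -> Printf.sprintf "%s: %s\n" name contents)
  |> String.concat ""

(* The [signature:] field is then computed over [canonical_form fields]
   and stored as [ "keyid" "algorithm" "signature" ] triplets. *)
</code></pre>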
<p>Signed tags are git annotated tags, and their contents follow the same rules. In
this case, the format should contain the field <code>commit:</code>, pointing to the
commit-hash that is being signed and tagged.</p>
<h3>File hierarchy</h3>
<p>The repository format is changed by the addition of:</p>
<ul>
<li>a directory <code>keys/</code> at the root
</li>
<li>delegation files <code>packages/<pkgname>/delegate</code> and
<code>compilers/<patchname>.delegate</code>
</li>
<li>signed checksum files at <code>packages/<pkgname>/<pkgname>.<version>/signature</code>
</li>
</ul>
<p>Here is an example:</p>
<pre><code class="language-shell-session">repository root /
|--packages/
| |--pkgname/
| | |--delegation - signed by developer, repo maintainer
| | |--pkgname.version1/
| | | |--opam
| | | |--descr
| | | |--url
| | | `--signature - signed by developer1
| | `--pkgname.version2/ ...
| `--pkgname2/ ...
|--compilers/
| |--version/
| | |--version+patch/
| | | |--version+patch.comp
| | | |--version+patch.descr
| | | `--version+patch.signature
| | `--version+patch2/ ...
| |--patch.delegate
| |--patch2.delegate
| `--version2/ ...
`--keys/
|--root
`--dev/
|--developer1-email - signed by developer1,
`--developer2-email ... and repo maint. once verified
</code></pre>
<p>Keys are provided in different files as string triplets
<code>[ [ "keyid" "algo" "key" ] ]</code>. <code>keyid</code> must not conflict with any
previously-defined keys, and <code>algo</code> may be "rsa" and keys encoded in PEM format,
with further options available later.</p>
<p>For example, the <code>keys/root</code> file will have the format:</p>
<pre><code class="language-shell-session">date=2015-06-04T13:53:00Z
root-keys: [ [ "keyid" "{expire-date}" "algo" "key" ] ]
snapshot-keys: [ [ "keyid" "algo" "key" ] ]
repository-maintainer-keys: [ [ "keyid" "algo" "key" ] ]
</code></pre>
<p>This file is signed by current <em>and past</em> root keys -- to allow clients to
update. The <code>date:</code> field provides further protection against rollback attacks:
no clients may accept a file with a date older than what they currently have.
Date is in the ISO 8601 standard with 0 UTC offset, as suggested in TUF.</p>
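<p>A minimal sketch of that rollback check, assuming the client simply remembers the
<code>date:</code> of the last <code>keys/root</code> file it accepted (names are illustrative):</p>
<pre><code class="language-ocaml">(* ISO 8601 dates with a fixed 0 UTC offset sort chronologically when
   compared as strings, so a plain string comparison is enough here. *)
let accept_root_update ~last_accepted_date ~new_date =
  match last_accepted_date with
  | None -> true             (* first fetch: nothing to compare against *)
  | Some d -> new_date >= d  (* refuse any file older than what we have *)
</code></pre>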
<h4>Delegation files</h4>
<p><code>/packages/pkgname/delegation</code> delegates ownership on versions of package
<code>pkgname</code>. The file contains version constraints associated with keyids, <em>e.g.</em>:</p>
<pre><code class="language-shell-session">name: pkgname
delegates: [
"thomas@gazagnaire.org"
"louis.gesbert@ocamlpro.com" {>= "1.0"}
]
</code></pre>
<p>The file is signed:</p>
<ul>
<li>by the original developer submitting it
</li>
<li>or by a developer previously having delegation for all versions, for changes
</li>
<li>or directly by repository maintainers, validating the delegation, and
increasing the level of trust
</li>
</ul>
<p>Every key a developer delegates trust to must also be signed by the developer.</p>
<p><code>compilers/patch.delegate</code> files follow a similar format (we are considering
changing the hierarchy of compilers to match that of packages, to make things
simpler).</p>
<p>The <code>delegates:</code> field may be empty: in this case, no packages by this name are
allowed on the repository. This may be useful to mark deletion of obsolete
packages, and make sure a new, different package doesn't take the same name by
mistake or malice.</p>
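<p>A small OCaml sketch of how such a delegation could be checked (the types are
illustrative: each delegate is a key id together with an optional version filter
mirroring the constraints above):</p>
<pre><code class="language-ocaml">(* Sketch: an empty [delegates] list means no package of this name is
   allowed; otherwise the signing key must appear with a constraint that
   accepts the submitted version. *)
let key_may_sign ~delegates ~keyid ~version =
  List.exists
    (fun (delegate_key, version_filter) ->
       delegate_key = keyid
       && (match version_filter with
           | None -> true                  (* delegation on all versions *)
           | Some accepts -> accepts version))
    delegates
</code></pre>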
<h4>Package signature files</h4>
<p>These guarantee the integrity of a package: this includes metadata and the
package archive itself (which may, or may not, be mirrored on the opam
repository server).</p>
<p>The file, besides the package name and version, has a field <code>package-files:</code>
containing a list of files below <code>packages/<pkgname>/<pkgname>.<version></code>
together with their file sizes in bytes and one or more hashes, prefixed by their
kind, and a field <code>archive:</code> containing the same details for the upstream
archive. For example:</p>
<pre><code class="language-shell-session">name: pkgname
version: pkgversion
package-files: [
"opam" {901 [ sha1 "7f9bc3cc8a43bd8047656975bec20b578eb7eed9" md5 "1234567890" ]}
"descr" {448 [ sha1 "8541f98524d22eeb6dd669f1e9cddef302182333" ]}
"url" {112 [ sha1 "0a07dd3208baf4726015d656bc916e00cd33732c" ]}
"files/ocaml.4.02.patch" {17243 [ sha1 "b3995688b9fd6f5ebd0dc4669fc113c631340fde" ]}
]
archive: [ 908460 [ sha1 "ec5642fd2faf3ebd9a28f9de85acce0743e53cc2" ] ]
</code></pre>
<p>This file is signed either:</p>
<ul>
<li>by the <code>initial-bootstrap</code> key, only initially
</li>
<li>by a delegate key (<em>i.e.</em> by a delegated-to developer)
</li>
<li>by a quorum of repository maintainers
</li>
</ul>
<p>The latter is needed to hot-fix packages on the repository: repository
maintainers often need to do so. A quorum is still required, to prevent a single
RM key compromise from allowing arbitrary changes to every package. The quorum
is not initially required to sign a delegation, but is, consistently, required
for any change to an existing, signed delegation.</p>
<p>Compiler signature files <code><version>+<patch>.signature</code> are similar, with fields
<code>compiler-files</code> containing checksums for <code><version>+<patch>.*</code>, the same field
<code>archive:</code> and an additional optional field <code>patches:</code>, containing the sizes and
hashes of upstream patches used by this compiler.</p>
<p>If the delegation or signature can't be validated, the package or compiler is
ignored. If any file doesn't correspond to its size or hashes, it is ignored as
well. Any file not mentioned in the signature file is ignored.</p>
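<p>Concretely, the per-file check amounts to something like the following sketch,
where <code>hash_of_kind</code> stands in for the actual digest functions (sha1, md5, ...)
and is not a real opam function:</p>
<pre><code class="language-ocaml">(* Sketch: a listed file is accepted only if its size and every advertised
   hash match; otherwise it is ignored, as described above. *)
let file_matches ~hash_of_kind ~path ~expected_size ~expected_hashes =
  let contents =
    let ic = open_in_bin path in
    let n = in_channel_length ic in
    let s = really_input_string ic n in
    close_in ic;
    s
  in
  String.length contents = expected_size
  && List.for_all
       (fun (kind, digest) -> hash_of_kind kind contents = digest)
       expected_hashes
</code></pre>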
<h2>Snapshots and linearity</h2>
<h3>Main snapshot role</h3>
<p>The snapshot key automatically adds a <code>signed</code> annotated tag to the top of the
served branch of the repository. This tag contains the commit-hash and the
current timestamp, effectively ensuring freshness and consistency of the full
repository. This protects against mix-and-match, rollback and freeze attacks.</p>
<p>The <code>signed</code> annotated tag is deleted and recreated by the snapshot bot, after
checking the validity of the update, periodically and after each change.</p>
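<p>On the client side, the corresponding freshness check could look like this sketch
(the record type and the maximum age are hypothetical; the point is only that a
stale time-stamp is eventually detected):</p>
<pre><code class="language-ocaml">(* Sketch: the [signed] tag carries the commit-hash and a time-stamp; a
   client rejects a tag older than some maximum accepted age, so an
   indefinite freeze is eventually noticed. *)
type signed_tag = { commit_hash : string; timestamp : float }

let tag_is_fresh ~now ~max_age tag =
  now -. tag.timestamp <= max_age   (* both in seconds since the epoch *)
</code></pre>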
<h3>Linearity</h3>
<p>The repository is served using git: this means that not only the latest version,
but the full history of changes, is known. This has several benefits: among them,
incremental downloads "for free", and a very easy way to sign snapshots. Another
good point is that we have a working, full OCaml implementation.</p>
<p>We mentioned above that we use the snapshot signatures not only for repository
signing, but also as an initial guarantee for submitted developer's keys and
delegations. One may also have noticed, in the above, that we sign for
delegations, keys etc. individually, but without a bundle file that would ensure
no signed files have been maliciously removed.</p>
<p>These concerns are all addressed by a <em>linearity condition</em> on the repository's
git: the snapshot bot does not only check and sign for a given state of the
repository, it checks every individual change to the repository since the last
well-known, signed state: patches have to follow from that git commit
(descendants on the same branch), and are validated to respect certain
conditions: no signed files are removed or altered without signature, etc.</p>
<p>Moreover, this check is also done on clients, every time they update: it is
slightly weaker, as clients don't update continuously (an attacker may have
rewritten the commits since their last update), but it still gives very good guarantees.</p>
<p>A key and delegation that have been submitted by a developer and merged, even
without RM signature, are signed as part of a snapshot: git and the linearity
conditions allow us to guarantee that this delegation won't be altered or
removed afterwards, even without an individual signature. Even if the repository
is compromised, an attacker won't be able to roll out malicious updates breaking
these conditions to clients.</p>
<p>The linearity invariants are:</p>
<ol>
<li>no key, delegation, or package version (signed files) may be removed
</li>
<li>a new key is signed by itself, and optionally by a RM
</li>
<li>a new delegation is signed by the delegate key, optionally by a RM. Signing
keys must also sign the delegate keys
</li>
<li>a new package or package version is signed by a valid key holding a valid
delegation for this package version
</li>
<li>keys can only be modified with signature from the previous key or a quorum
of RM keys
</li>
<li>delegations can only be modified with signature by a quorum of RMs, or
possibly by a former delegate key (without version constraints) in case
there was previously no RM signature
</li>
<li>any package modification is signed by an appropriate delegate key, or by a
quorum of RM keys
</li>
</ol>
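<p>A very simplified skeleton of such a check is sketched below; the change type and
the two predicates are placeholders for the real repository model, not actual opam
code:</p>
<pre><code class="language-ocaml">(* Sketch: every commit since the last signed state is inspected, and each
   changed path must satisfy one of the invariants listed above. *)
type change = Added of string | Modified of string | Removed of string

let commit_is_linear ~is_signed_file ~change_is_authorised changes =
  List.for_all
    (function
      | Removed path -> not (is_signed_file path)       (* invariant 1 *)
      | Added path | Modified path -> change_is_authorised path)
    changes

let history_is_linear ~is_signed_file ~change_is_authorised commits =
  List.for_all (commit_is_linear ~is_signed_file ~change_is_authorised) commits
</code></pre>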
<p>It is sometimes needed to do operations, like key revocation, that are not
allowed by the above rules. These are enabled by the following additional rules,
that require the commit including the changes to be signed by a quorum of
repository maintainers using an annotated tag:</p>
<ol>
<li>package or package version removal
</li>
<li>removal (revocation) of a developer key
</li>
<li>removal of a package delegation (it's in general preferable to leave an
empty delegation)
</li>
</ol>
<p>Changes to the <code>keys/root</code> file, which may add, modify or revoke root, RM and
snapshot keys, are verified in the normal way, but need special handling when
checking linearity since this file decides the validity of RM signatures. Since this
file may be needed before we can check the <code>signed</code> tag, it has its own
timestamp to prevent rollback attacks.</p>
<p>In case the linearity invariant check fails:</p>
<ul>
<li>on the GitHub repository, this is marked and the RMs are advised not to merge
(or to complete missing tag signatures)
</li>
<li>on the clients, the update is refused, and the user informed of what's going
on (the repository has likely been compromised at that point)
</li>
<li>on the repository (checks by the snapshot bot), update is stalled and all
repository maintainers immediately warned. To recover, the broken commits
(between the last <code>signed</code> tag and master) need to be amended.
</li>
</ul>
<h2>Work and changes involved</h2>
<h3>General</h3>
<p>Write modules for key handling, and for signing and verification of opam files.</p>
<p>Write the git synchronisation module with linearity checks.</p>
<h3>opam</h3>
<p>Rewrite the default HTTP repository synchronisation module to use git fetch,
verify, and git pull. This should be fully transparent, except:</p>
<ul>
<li>in the cases of errors, of course
</li>
<li>when registering a non-official repository
</li>
<li>for some warnings with features that disable signatures, like source pinning
(probably only the first time would be good)
</li>
</ul>
<p>Include the public root keys for the default repository, and implement
management of updated keys in <code>~/.opam/repo/name</code>.</p>
<p>Handle the new formats for checksums and non-repackaged archives.</p>
<p>Allow a per-repository security threshold (<em>e.g.</em> allow all, allow only signed
packages, allow only packages signed by a verified key, allow only packages
signed by their verified developer). It should be easy but explicit to add a
local network, unsigned repository. Backends other than git won't be signed
anyway (local, rsync...).</p>
<h3>opam-publish</h3>
<p>Generate keys, handle locally stored keys, generate <code>signature</code> files, handle
signing, submit signatures, check delegation, submit new delegation, request
delegation change (might require repository maintainer intervention if an RM
signed the delegation), delete developer, delete package.</p>
<p>Manage local keys. Probably including re-generating, and asking for revocation.</p>
<h3>opam-admin</h3>
<p>Most operations on signatures and keys will be implemented as independent
modules (as to be usable from <em>e.g.</em> unikernels working on the repository). We
should also make them available from <code>opam-admin</code>, for testing and manual
management. Special tooling will also be needed by RMs.</p>
<ul>
<li>fetch the archives (but don't repackage as <code>pkg+opam.tar.gz</code> anymore)
</li>
<li>allow all useful operations for repository maintainers (maybe in a different
tool ?):
<ul>
<li>manage their keys
</li>
<li>list and sign changed packages directly
</li>
<li>list and sign waiting delegations to developer keys
</li>
<li>validate signatures, print reports
</li>
<li>sign tags, including adding a signature to an existing tag to meet the
quorum
</li>
<li>list quorums waiting to be met on a given branch
</li>
</ul>
</li>
<li>generate signed snapshots (same as the snapshot bot, for testing)
</li>
</ul>
<h3>Signing bots</h3>
<p>If we don't want to have this processed on the publicly visible host serving the
repository, we'll need a mechanism to fetch the repository, and submit the
signed tag back to the repository server.</p>
<p>Doing this through mirage unikernels would be cool, and provide good isolation.
We could imagine running this process regularly:</p>
<ul>
<li>fetch changes from the repository's git (GitHub)
</li>
<li>check for consistency (linearity)
</li>
<li>generate and sign the <code>signed</code> tag
</li>
<li>push tag back to the release repository
</li>
</ul>
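<p>In pseudo-OCaml, the bot's cycle could be as simple as the sketch below; the four
functions only name the steps above and do not exist in opam:</p>
<pre><code class="language-ocaml">(* Sketch of one snapshot cycle, run periodically and after each change. *)
let snapshot_cycle ~fetch ~check_linearity ~sign_tag ~push_tag () =
  fetch ();
  if check_linearity () then begin
    sign_tag ();
    push_tag ()
  end else
    (* stall updates and warn the repository maintainers *)
    prerr_endline "linearity check failed: refusing to sign"
</code></pre>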
<h3>Travis</h3>
<p>All security information and check results should be available to RMs before
they make the decision to merge a commit to the repository. This means including
signature and linearity checks in a process running on Travis, or similarly on
every pull-request to the repository, and displaying the results in the GitHub
tracker.</p>
<p>This should avoid most cases where the snapshot bot fails the validation,
leaving it stuck (as well as any repository updates) until the bad commits are
rewritten.</p>
<h2>Some more detailed scenarios</h2>
<h3><code>opam init</code> and <code>update</code> scenario</h3>
<p>On <code>init</code>, the client would clone the repository and get to the <code>signed</code> tag,
get and check the associated <code>keys/root</code> file, and validate the <code>signed</code> tag
according to the new keyset. If all goes well, the new set of root, RM and
snapshot keys is registered.</p>
<p>Then all files' signatures are checked following the trust chains, and copied to
the internal repository mirror opam will be using (<code>~/.opam/repo/<name></code>). When
a package archive is needed, the download is done either from the repository, if
the file is mirrored, or from its upstream, in both cases with known file size
and upper limit: the download is stopped if it goes above the expected size, and
the file is removed if it doesn't match both the expected size and hashes.</p>
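<p>A sketch of this bounded download, with <code>read_chunk</code> and <code>hash</code> standing in for
the real I/O and digest functions (not opam's actual code):</p>
<pre><code class="language-ocaml">(* Sketch: read at most [expected_size] bytes, abort if the source keeps
   sending more, then check both the size and the hash before keeping the
   downloaded data. *)
let bounded_download ~expected_size ~expected_hash ~hash read_chunk =
  let buf = Buffer.create expected_size in
  let rec loop () =
    match read_chunk () with
    | None -> Some (Buffer.contents buf)          (* end of stream *)
    | Some chunk ->
        Buffer.add_string buf chunk;
        if Buffer.length buf > expected_size
        then None                                 (* endless-data guard *)
        else loop ()
  in
  match loop () with
  | Some data when String.length data = expected_size
                && hash data = expected_hash -> Some data
  | _ -> None                                     (* mismatch: discard *)
</code></pre>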
<p>On subsequent updates, the process is the same except that a fetch operation is
done on the existing clone, and that the repository is forwarded to the new
<code>signed</code> tag only if linearity checks passed (and the update is aborted
otherwise).</p>
<h3><code>opam-publish</code> scenario</h3>
<ul>
<li>The first time a developer runs <code>opam-publish submit</code>, a developer key is
generated, and stored locally.
</li>
<li>Upon <code>opam-publish submit</code>, the package is signed using the key, and the
signature is included in the submission.
</li>
<li>If the key is known, and delegation for this package matches, all is good
</li>
<li>If the key is not already registered, it is added to <code>/keys/dev/</code> within the
pull-request, self-signed.
</li>
<li>If there is no delegation for the package, the <code>/packages/pkgname/delegation</code>
file is added, delegating to the developer key and signed by it.
</li>
<li>If there is an existing delegation that doesn't include the author's key,
this will require manual intervention from the repository managers. We may yet
submit a pull-request adding the new key as delegate for this package, and ask
the repository maintainers -- or former developers -- to sign it.
</li>
</ul>
<h2>Security analysis</h2>
<p>We claim that the above measures give protection against:</p>
<ul>
<li>
<p>Arbitrary packages: if an existing package is not signed, it is not installed
(or even visible) to the user. Anybody can submit new unclaimed packages (but,
in the current setting, still need GitHub write access to the repository, or
to bypass GitHub's security).</p>
</li>
<li>
<p>Rollback attacks: git updates must follow the currently known <code>signed</code> tag. If
the snapshot bot detects deletions of packages, it refuses to sign, and
clients double-check this. The <code>keys/root</code> file contains a timestamp.</p>
</li>
<li>
<p>Indefinite freeze attacks: the snapshot bot periodically signs the <code>signed</code>
tag with a timestamp, if a client receives a tag older than the expected age
it will notice.</p>
</li>
<li>
<p>Endless data attacks: we rely on the git protocol and this does not defend
against endless data. Downloading of package archives (whose origin may
be any mirror), though, is protected. The scope of the attack is mitigated in
our setting, because there are no unattended updates: the program is run
manually, and interactively, so the user is most likely to notice.</p>
</li>
<li>
<p>Slow retrieval attacks: same as above.</p>
</li>
<li>
<p>Extraneous dependencies attacks: metadata is signed, and if the signature does
not match, it is not accepted.</p>
<blockquote>
<p>NOTE: the <code>provides</code> field -- yet unimplemented, see the document in
<code>opam/doc/design</code> -- could provide a vector in this case, by advertising a
replacement for a popular package. Additional measures will be taken when
implementing the feature, like requiring a signature for the provided
package.</p>
</blockquote>
</li>
<li>
<p>Mix-and-match attacks: the repository has a linearity condition, and partial
repositories are not possible.</p>
</li>
<li>
<p>Malicious repository mirrors: if the signature does not match, reject.</p>
</li>
<li>
<p>Wrong developer attack: if the developer is not in the delegation, reject.</p>
</li>
</ul>
<h3>GitHub repository</h3>
<p>Is the link between GitHub (opam-repository) and the signing bot special?
If there is a MITM on this link, they can add arbitrary new packages but,
due to missing signatures, only custom universes. No existing package can
be altered or deleted, otherwise the consistency condition above no longer
holds and the signing bot will not sign.</p>
<p>Certainly, the access can be frozen, thus the signing bot does not receive
updates, but continues to sign the old repository version.</p>
<h3>Snapshot key</h3>
<p>If the snapshot key is compromised, an attacker is able to:</p>
<ul>
<li>
<p>Add arbitrary (non already existing) packages, as above.</p>
</li>
<li>
<p>Freeze, by forever re-signing the <code>signed</code> tag with an updated timestamp.</p>
</li>
</ul>
<p>Most importantly, the attacker won't be able to tamper with existing packages.
This hugely reduces the potential of an attack, even with a compromised
snapshot key.</p>
<p>The attacks above would also require either a MITM between the repository and
the client, or a compromise of the opam repository: in the latter case, since
the linearity check is also reproduced on the clients:</p>
<ul>
<li>any tamper could be detected very quickly, and measures taken.
</li>
<li>a freeze would be detected as soon as a developer checks that their
package is really online. That currently happens
<a href="https://github.com/ocaml/opam-repository/pulse">several times a day</a>.
</li>
</ul>
<p>The repository would then just have to be reset to before the attack, which git
makes as easy as it can get, and the holders of the root keys would sign a new
<code>keys/root</code> file, revoking the compromised snapshot key and introducing a new one.</p>
<p>In the time before the signing bot can be put back online with the new snapshot
key -- <em>i.e.</em> the breach has been found and fixed -- a developer could manually
sign time-stamped tags before they expire (<em>e.g.</em> once a day) so as not to hold
back updates.</p>
<h3>Repository Maintainer keys</h3>
<p>Repository maintainers are powerful: they can modify existing opam files and
sign them (as hotfixes), introduce new delegations for packages, etc.</p>
<p>However, by requiring a quorum for sensitive operations, we limit the scope of a
single RM key compromise to the validation of new developer keys or delegations
(which should be the most common operation done by RMs): this would let an
attacker raise the apparent trust level of new, malicious packages, but otherwise
doesn't change much from what can be done with plain access to the git repository.</p>
<p>A further compromise of a quorum of RM keys would allow an attacker to remove or tamper with
any developer key, delegation or package: any of these amounts to being able to
replace any package with a compromised version. Cleaning up would require
replacing all but the root keys, and resetting the repository to before any
malicious commit.</p>
<h2>Difference to TUF</h2>
<ul>
<li>we use git
</li>
<li>thus get linearity "for free"
</li>
<li>and already have a hash over the entire repository
</li>
<li>TUF provides a mechanism for delegation, but it's both heavier and not
expressive enough for what we wanted -- delegate to packages directly.
</li>
<li>We split into many more, per-package files, to fit with and nicely
extend the git-based workflow that made the success of opam. The original TUF
would have big JSON files signing for a lot of files, and likely to conflict.
Both developers and repository maintainers should be able to safely work
concurrently without issue. Signing bundles in TUF gives the additional
guarantee that no file is removed without proper signature, but this is
handled by git and signed tags.
</li>
<li>instead of a single file with all signed packages by a specific developer,
one file per package
</li>
</ul>
<h3>Differences to Haskell</h3>
<ul>
<li>use TUF keys, not gpg
</li>
<li>e2e signing
</li>
</ul>
Reduced Memory Allocations with ocp-memprof https://ocamlpro.com/blog/2015_05_18_reduced_memory_allocations_with_ocp_memprof2015-05-18T08:12:13Z2015-05-18T08:12:13Z
Çagdas Bozman
In this blog post, we explain how ocp-memprof helped us identify a piece of code in Alt-Ergo that needed to be improved. Simply put, a function that merges two maps was performing a lot of unnecessary allocations, negatively impacting the garbage collector's activity. A simple patch allowed us to pr...<p>In this blog post, we explain how <code>ocp-memprof</code> helped us identify a piece of code in Alt-Ergo that needed to be improved. Simply put, a function that merges two maps was performing a lot of unnecessary allocations, negatively impacting the garbage collector's activity. A simple patch allowed us to prevent these allocations, and thus speed up Alt-Ergo's execution.</p>
<h3>The Story</h3>
<p>It all started with a challenging example coming from an industrial user of <a href="https://alt-ergo.ocamlpro.com/">Alt-Ergo</a>, our SMT solver. It was proven by Alt-Ergo in approximately 70 seconds. This seemed abnormally long and needed to be investigated. Unfortunately, all our tests with different options (number of triggers, case-split analysis, …) and different plugins (satML plugin, profiling plugin, fm-simplex plugin) of Alt-Ergo failed to improve the resolution time. We then profiled an execution using <code>ocp-memprof</code> to understand the memory behavior of this example.</p>
<h3>Profiling an Execution with <code>ocp-memprof</code></h3>
<p>As usual, profiling an OCaml application with <code>ocp-memprof</code> is very simple (see the <a href="https://memprof.typerex.org/free-version.php?action=documentation">user manual</a> for more details). We just compiled Alt-Ergo in the OPAM switch for <code>ocp-memprof</code> (version <code>4.01.0+ocp1</code>) and executed the following command:</p>
<pre><code class="language-shell-session">$ ocp-memprof -exec ./ae-4.01.0+ocp1-public-without-patch pb-many-GCs.why
</code></pre>
<p>The execution above triggers 612 garbage collections in about 114 seconds. The analysis of the generated dumps produces the evolution graph below. We notice on the graph that:</p>
<ul>
<li>we have approximately 10 MB of hash-tables allocated since the beginning of the execution, which is expected;
</li>
<li>the second most allocated data in the major heap are maps, and they keep growing during the execution of Alt-Ergo.
</li>
</ul>
<p>We are not able to precisely identify the allocation origins of the maps in this graph (maps are generic structures that are intensively used in Alt-Ergo). To investigate further, we wanted to know if some global value was abnormally retaining a lot of memory, or if some (non recursive-terminal) iterator was causing some trouble when applied on huge data structures. For that, we extended the analysis with the <code>--per-root</code> option to focus on the memory graph of the last dump. This is done by executing the following command, where 4242 is the PID of the process launched by <code>ocp-memprof --exec</code> in the previous command:</p>
<pre><code class="language-shell-session">$ ocp-memprof -load 4242 -per-root 611
</code></pre>
<p><img src="/blog/assets/img/graph_before_mini.png" alt="" />
<img src="/blog/assets/img/screenshot_per_root_before_mini.png" alt="" /></p>
<p>The per-root graph (above, on the right) gives more interesting information. When expanding the <code>stack</code> node and sorting the sixth column in decreasing order, we notice that:</p>
<ul>
<li>a bunch of these maps are still in the stack: the item <code>Map_at_192_offset_1</code> in the first column means that most of the memory is retained by the <code>fold</code> function, at line 192 of the <code>Map</code> module (resolution of stack frames is only available in the commercial version of <code>ocp-memprof</code>);
</li>
<li>the "kind" column corresponding to <code>Map_at_192_offset_1</code> gives better information. It provides the signature of the function passed to fold. This information is already provided by <a href="https://memprof.typerex.org/">the online version</a>.
</li>
</ul>
<pre><code class="language-cpp">Uf.Make(Uf.??Make.X).LX.t ->;
Explanation.t ->;
Explanation.t Map.Make(Uf.Make(Uf.??Make.X).LX).t ->;
Explanation.t Map.Make(Uf.Make(Uf.??Make.X).LX).t
</code></pre>
<p>This information allows us to see the precise origin of the allocation: the map of module <code>LX</code> used in <a href="https://github.com/OCamlPro/alt-ergo/blob/master/src/theories/uf.ml">uf.ml</a>. Lucky us, there is only one <code>fold</code> function of <code>LX</code>'s maps in the <code>Uf</code> module with the same type.</p>
<h3>Optimizing the Code</h3>
<p>Thanks to the information provided by the <code>--per-root</code> option, we identified the code responsible for this behavior:</p>
<pre><code class="language-ocaml">(*** function extracted from module uf.ml ***)
module MapL = Map.Make(LX)
let update_neqs r1 r2 dep env =
let merge_disjoint_maps l1 ex1 mapl =
try
let ex2 = MapL.find l1 mapl in
let ex = Ex.union (Ex.union ex1 ex2) dep in
raise (Inconsistent (ex, cl_extract env))
with Not_found ->
MapL.add l1 (Ex.union ex1 dep) mapl
in
let nq_r1 = lookup_for_neqs env r1 in
let nq_r2 = lookup_for_neqs env r2 in
let mapl = MapL.fold merge_disjoint_maps nq_r1 nq_r2 in
MapX.add r2 mapl (MapX.add r1 mapl env.neqs)
</code></pre>
<p>Roughly speaking, the function above retrieves two maps <code>nq_r1</code> and <code>nq_r2</code> from <code>env</code>, and folds on the first one while providing the second map as an accumulator. The local function <code>merge_disjoint_maps</code> (passed to fold) raises <code>Exception.Inconsistent</code> if the original maps were not disjoint. Otherwise, it adds the current binding (after updating the corresponding value) to the accumulator. Finally, the result <code>mapl</code> of the fold is used to update the values of <code>r1</code> and <code>r2</code> in <code>env.neqs</code>.</p>
<p>After further debugging, we observed that one of the maps (<code>nq_r1</code> and <code>nq_r2</code>) is always empty in our situation. A straightforward fix consists in testing whether one of these two maps is empty. If it is the case, we simply return the other map. Here is the corresponding code:</p>
<pre><code class="language-ocaml">(*** first patch: testing if one of the maps is empty ***)
…
let mapl =
if MapL.is_empty nq_r1 then nq_r2
else
if MapL.is_empty nq_r2 then nq_r1
else MapL.fold merge_disjoint_maps nq_r1 nq_r2
…
</code></pre>
<p>Of course, a more generic solution should not just test for emptiness, but should fold on the smallest map. In the second patch below, we used a slightly modified version of OCaml's maps that exports the <code>height</code> function (implemented in constant time). This way, we always fold on the smallest map while providing the biggest one as an accumulator.</p>
<pre><code class="language-ocaml">(*** second (better) patch : folding on the smallest map ***)
…
let small, big =
if MapL.height nq_r1 > MapL.height nq_r2 then nq_r1, nq_r2
else nq_r2, nq_r1
in
let mapl = MapL.fold merge_disjoint_maps small big in
…
</code></pre>
<h3>Checking the Efficiency of our Patch</h3>
<p>Curious to see the result of the patch? We regenerated the evolution and memory graphs of the patched code (fix 1), and noticed:</p>
<ul>
<li>a better resolution time: from 69 seconds to 16 seconds;
</li>
<li>less garbage collection: from 53,000 minor collections to 19,000;
</li>
<li>a smaller memory footprint: from 26 MB to 24 MB.
</li>
</ul>
<p><img src="/blog/assets/img/graph_after_mini.png" alt="" />
<img src="/blog/assets/img/screenshot_per_root_after_mini.png" alt="" /></p>
<h3>Conclusion</h3>
<p>We show in this post that <code>ocp-memprof</code> can also be used to optimize your code, not only by decreasing memory usage, but by improving the speed of your application. The interactive graphs are online in our gallery of examples if you want to see and explore them (<a href="https://memprof.typerex.org/users/5a198a7f26b9b9d6f402276e16818a66/2015-05-15_15-32-21_48c9e783500e896444f998eb001fff4c_4242/">without the patch</a> and <a href="https://memprof.typerex.org/users/5a198a7f26b9b9d6f402276e16818a66/2015-05-15_16-13-22_4174baa4b9b5d8845653e04307b010a9_4530/">with the patch</a>).</p>
<table class="tableau2">
<tbody>
<tr>
<td></td>
<th>AE</th>
<th>AE + patch</th>
<th>Remarks</th>
</tr>
</tbody>
<tbody>
<tr>
<th>4.01.0</th>
<td>69.1 secs</td>
<td>16.4 secs</td>
<td>substantial improvement on the example</td>
</tr>
<tr>
<th>4.01.0+ocp1</th>
<td>76.3 secs</td>
<td>17.1 secs</td>
<td>when using the patched version of Alt-Ergo</td>
</tr>
<tr>
<th>dumps generation</th>
<td>114.3 secs (+49%)</td>
<td>17.6 secs (+2.8%)</td>
<td>(important) overhead when dumping<br />
memory snapshots</td>
</tr>
<tr>
<th># dumps (major collections)</th>
<td>612 GCs</td>
<td>31 GCs</td>
<td>impressive GC activity without the patch</td>
</tr>
<tr>
<th>dumps analysis<br />
(online ocp-memprof)</th>
<td>759 secs</td>
<td>24.3 secs</td>
<td></td>
</tr>
<tr>
<th>dumps analysis<br />
(commercial ocp-memprof)</th>
<td>153 secs</td>
<td>3.7 secs</td>
<td>analysis with commercial ocp-memprof is<br />
**~ x5 faster** than public version (above)</td>
</tr>
<tr>
<th>AE memory footprint</th>
<td>26 MB</td>
<td>24 MB</td>
<td>memory consumption is comparable</td>
</tr>
<tr>
<th>minor collections</th>
<td>53K</td>
<td>19K</td>
<td>fewer minor GCs thanks to the patch</td>
</tr>
</tbody>
</table>
<p>Do not hesitate to use <code>ocp-memprof</code> on your applications. Of course, all feedback and suggestions are welcome, just <a href="mailto:contact@ocamlpro.com">email</a> us!</p>
<p>More information:</p>
<ul>
<li>Homepage: <a href="https://memprof.typerex.org/">https://memprof.typerex.org/</a>
</li>
<li>Gallery of examples: <a href="https://memprof.typerex.org/gallery.php">https://memprof.typerex.org/gallery.php</a>
</li>
<li>Free Version: <a href="https://memprof.typerex.org/free-version.php">https://memprof.typerex.org/free-version.php</a>
</li>
<li>Commercial Version: <a href="https://memprof.typerex.org/commercial-version.php">https://memprof.typerex.org/commercial-version.php</a>
</li>
<li>Report a Bug: <a href="https://memprof.typerex.org/report-a-bug.php">https://memprof.typerex.org/report-a-bug.php</a>
</li>
</ul>
OPAM 1.2.2 Releasedhttps://ocamlpro.com/blog/2015_05_07_opam_1.2.2_released2015-05-07T08:12:13Z2015-05-07T08:12:13Z
Louis Gesbert
OPAM 1.2.2 has just been released. This fixes a few issues over 1.2.1 and brings a couple of improvements, in particular better use of the solver to keep the installation as up-to-date as possible even when the latest version of a package can not be installed. Upgrade from 1.2.1 (or earlier) See the...<p><a href="https://github.com/ocaml/opam/releases/tag/1.2.2">OPAM 1.2.2</a> has just been
released. This fixes a few issues over 1.2.1 and brings a couple of improvements,
in particular better use of the solver to keep the installation as up-to-date as
possible even when the latest version of a package can not be installed.</p>
<h3>Upgrade from 1.2.1 (or earlier)</h3>
<p>See the normal
<a href="https://opam.ocaml.org/doc/Install.html">installation instructions</a>: you should
generally pick up the packages from the same origin as you did for the last
version -- possibly switching from the official repository packages to the ones
we provide for your distribution, in case the former are lagging behind.</p>
<p>There are no changes in repository format, and you can roll back to earlier
versions in the 1.2 branch if needed.</p>
<h3>Improvements</h3>
<ul>
<li>Conflict messages now report the original version constraints without
translation, and they have been made more concise in some cases
</li>
<li>Some new <code>opam lint</code> checks, <code>opam lint</code> now numbers its warnings and may
provide script-friendly output
</li>
<li>Feature to <strong>automatically install plugins</strong>, e.g. <code>opam depext</code> will prompt
to install <code>depext</code> if available and not already installed
</li>
<li><strong>Priority to newer versions</strong> even when the latest can't be installed (with a
recent solver only; before, all non-latest versions were considered equivalent
by the solver)
</li>
<li>Added <code>opam list --resolve</code> to list a consistent installation scenario
</li>
<li>Be more lenient by default on errors in OPAM files: these don't concern
end-users, and packagers and CI now have <code>opam lint</code> to check them.
</li>
</ul>
<h3>Fixes</h3>
<ul>
<li>OSX: state cache got broken in 1.2.1, which could induce longer startup times.
This is now fixed
</li>
<li><code>opam config report</code> has been fixed to report the external solver properly
</li>
<li><code>--dry-run --verbose</code> properly outputs all commands that would be run again
</li>
<li>Providing a simple path to an aspcud executable as external solver (through
options or environment) works again, for backwards-compatibility
</li>
<li>Fixed a fd leak on solver calls (thanks Ivan Gotovchits)
</li>
<li><code>opam list</code> now returns 0 when no packages match but no pattern was supplied,
which is more helpful in scripts relying on it to check dependencies.
</li>
</ul>
wxOCaml, camlidl and Class Moduleshttps://ocamlpro.com/blog/2015_04_13_yes_ocp_memprof_scanf2015-04-13T08:12:13Z2015-04-13T08:12:13Z
Çagdas Bozman
A few months ago, a memory leak in the Scanf.fscanf function of OCaml’s standard library has been reported on the OCaml mailing list. The following “minimal” example reproduces this misbehavior: Let us see how to identify the origin of the leak and fix it with our OCaml memory profiler. Instal...<p>A few months ago, a memory leak in the <code>Scanf.fscanf</code> function of OCaml’s standard library has been reported on the OCaml mailing list. The following “minimal” example reproduces this misbehavior:</p>
<pre><code class="language-ocaml">for i = 0 to 100_000 do
let ic = open_in "some_file.txt" in
Scanf.fscanf ic "%s" (fun _s -> ());
close_in ic
done;;
read_line ();;
</code></pre>
<p>Let us see how to identify the origin of the leak and fix it with our OCaml memory profiler.</p>
<h2>Installing the OCaml Memory Profiler</h2>
<p>We first install our modified OCaml compiler and the memory profiling tool thanks to the following opam commands:</p>
<pre><code class="language-shell-session">$ opam remote add memprof http://memprof.typerex.org/opam
$ opam update
</code></pre>
<pre><code class="language-shell-session">$ opam switch 4.01.0+ocp1-20150202
$ opam install ocp-memprof
$ eval opam config env
</code></pre>
<p>That’s all! Installation is done after only five (opam) commands.</p>
<h2>Compiling and Executing the Example</h2>
<p>The second step consists in compiling the example above and profiling it. This is simply achieved with the commands:</p>
<pre><code class="language-shell-session">$ ocamlopt scanf_leak.ml -o scanf.x
</code></pre>
<pre><code class="language-shell-session">$ ocp-memprof –exec scanf.x
</code></pre>
<p>You may notice that no instrumentation of the source is needed to enable profiling.</p>
<h2>Visualizing the Results</h2>
<p>In the last command above, <code>scanf.x</code> dumps a lot of information (related to memory occupation) during its execution. Our “OCaml Memory Profiler” then analyzes these dumps, and generates a “human readable” graph that shows the evolution of memory consumption after each OCaml garbage collection. Concretely, this yields the graph below (the interactive graph generated by <code>ocp-memprof</code> is <a href="http://memprof.typerex.org/users/04db0c7fb9232a0829e862d5bb2801fb/2015-03-05_10-54-04_29a20719bf5482a878293ed0effa010f_17729/index.html">available here</a>). As you can see, memory consumption is growing abnormally and exceeds 240MB! Note that we stopped <code>scanf.x</code> after 90 seconds.</p>
<h2>Playing With (Some of) ocp-memprof Capabilities</h2>
<p>ocp-memprof allows grouping and showing the data contained in the graph according to several criteria. For instance, data are grouped by “Modules” in the capture below. This allows us to deduce that most allocations are performed in the <code>Scanf</code> and <code>Buffer</code> modules.</p>
<p>In addition to aggregation capabilities, the <a href="http://memprof.typerex.org/users/04db0c7fb9232a0829e862d5bb2801fb/2015-03-05_10-54-04_29a20719bf5482a878293ed0effa010f_17729/index.html">interactive graph</a> generated by ocp-memprof also allows “zooming in” on particular data. For instance, by looking at <code>Scanf</code>, we obtain the graph below that shows the different functions that are allocating in this module. We remark that the most allocating function is <code>Scanf.Scanning.from_ic</code>. Let us have a look at this function.</p>
<h2>From Profiling Graphs to Source Code</h2>
<p>The code of the function <code>from_ic</code>, which is responsible for most of the allocations in <code>Scanf</code>, is the following:</p>
<pre><code class="language-ocaml">let memo_from_ic =
let memo = ref [] in
(fun scan_close_ic ic ->
try
List.assq ic !memo
with
| Not_found ->
let ib = from_ic scan_close_ic (From_channel ic) ic in
memo := (ic, ib) :: !memo;
ib)
;;
</code></pre>
<p>It looks like the leak is caused by the <code>memo</code> list that associates a lookahead buffer, resulting from the call to <code>from_ic</code>, with each input channel.</p>
<h2>Patching the Code</h2>
<p>Benoit Vaugon quickly sent a patch based on weak-pointers that seems to solve the problem. He modified the code as follows:</p>
<ul>
<li>he put the key (the input channel) in a weak set (<code>IcMemo</code>) in order to test whether it is gone;
</li>
<li>he created a pair that stores the key and the associated value;
</li>
<li>he put this pair in a weak set (<code>PairMemo</code>), from which it will be reclaimed at the next GC unless it is re-added;
</li>
<li>he added a finalizer on the pair that re-adds it to the weak set at each GC, as long as the key is still alive
</li>
</ul>
<pre><code class="language-ocaml">let memo_from_ic =
let module IcMemo = Weak.Make (
struct
type t = Pervasives.in_channel
let equal ic1 ic2 = ic1 = ic2
let hash ic = Hashtbl.hash ic
end)
in
let module PairMemo = Weak.Make (
struct
type t = Pervasives.in_channel * in_channel
let equal (ic1, _) (ic2, _) = ic1 = ic2
let hash (ic, _) = Hashtbl.hash ic
end)
in
let ic_memo = IcMemo.create 16 in
let pair_memo = PairMemo.create 16 in
let rec finaliser ((ic, _) as pair) =
if IcMemo.mem ic_memo ic then (
Gc.finalise finaliser pair;
PairMemo.add pair_memo pair) in
(fun scan_close_ic ic ->
try snd (PairMemo.find pair_memo (ic, stdin)) with
| Not_found ->
let ib = from_ic scan_close_ic (From_channel ic) ic in
let pair = (ic, ib) in
IcMemo.add ic_memo ic;
Gc.finalise finaliser pair;
PairMemo.add pair_memo pair;
ib)
;;
</code></pre>
<h2>Checking the Fixed Version</h2>
<p>Curious to see the memory behavior after applying this patch ? The graph below shows the memory consumption of the patched version of <code>Scanf</code> module. Again, the interactive version is <a href="http://memprof.typerex.org/users/04db0c7fb9232a0829e862d5bb2801fb/2015-03-05_13-41-42_a5217e67db7d057bc68baeb1d45d7ce0_28767/index.html">available here</a>. After each iteration of the <code>for-loop</code>, the memory is released as expected and memory consumption does not exceed 2.1Mb during each <code>for-loop</code> iteration.</p>
<h2>Conclusion</h2>
<p>This example is online in our gallery of examples if you want to see and explore the graphs (<a href="http://memprof.typerex.org/users/04db0c7fb9232a0829e862d5bb2801fb/2015-03-05_10-54-04_29a20719bf5482a878293ed0effa010f_17729/">with the leak</a> and <a href="http://memprof.typerex.org/users/04db0c7fb9232a0829e862d5bb2801fb/2015-03-05_13-41-42_a5217e67db7d057bc68baeb1d45d7ce0_28767/">without the leak</a>).</p>
<p>Do not hesitate to use <code>ocp-memprof</code> on your applications. Of course, all feedback and suggestions on using <code>ocp-memprof</code> are welcome, just send us an email!</p>
<p>More information:</p>
<ul>
<li>Homepage: <a href="http://memprof.typerex.org/">http://memprof.typerex.org/</a>
</li>
<li>Usage: <a href="http://memprof.typerex.org/free-version.php">http://memprof.typerex.org/free-version.php</a>
</li>
<li>Support: <a href="http://memprof.typerex.org/report-a-bug.php">http://memprof.typerex.org/report-a-bug.php</a>
</li>
<li>Gallery of examples: <a href="http://memprof.typerex.org/gallery.php">http://memprof.typerex.org/gallery.php</a>
</li>
<li>Commercial: <a href="http://memprof.typerex.org/commercial-version.php">http://memprof.typerex.org/commercial-version.php</a>
</li>
</ul>
OPAM 1.2.1 Releasedhttps://ocamlpro.com/blog/2015_03_18_opam_1.2.1_released2015-03-18T08:12:13Z2015-03-18T08:12:13Z
Louis Gesbert
OPAM 1.2.1 has just been released. This patch version brings a number of fixes and improvements over 1.2.0, without breaking compatibility. Upgrade from 1.2.0 (or earlier) See the normal installation instructions: you should generally pick up the packages from the same origin as you did for the last...<p><a href="https://github.com/ocaml/opam/releases/tag/1.2.1">OPAM 1.2.1</a> has just been
released. This patch version brings a number of fixes and improvements
over 1.2.0, without breaking compatibility.</p>
<h3>Upgrade from 1.2.0 (or earlier)</h3>
<p>See the normal
<a href="https://opam.ocaml.org/doc/Install.html">installation instructions</a>: you should
generally pick up the packages from the same origin as you did for the last
version -- possibly switching from the official repository packages to the ones
we provide for your distribution, in case the former are lagging behind.</p>
<h3>What's new</h3>
<p>No huge new features in this point release -- which means you can roll back
to 1.2.0 in case of problems -- but lots going on under the hood, and quite a
few visible changes nonetheless:</p>
<ul>
<li>The engine that processes package builds and other commands in parallel has
been rewritten. You'll notice the cool new display but it's also much more
reliable and efficient. Make sure to set <code>jobs:</code> to a value greater than 1 in
<code>~/.opam/config</code> in case you updated from an older version.
</li>
<li>The install/upgrade/downgrade/remove/reinstall actions are also processed in a
better way: the consequences of a failed action are minimised, where it used
to abort the full command.
</li>
<li>When using version control to pin a package to a local directory without
specifying a branch, only the tracked files are used by OPAM, but their
changes don't need to be checked in. This was found to be the most convenient
compromise.
</li>
<li>Sources used for several OPAM packages may use <code><name>.opam</code> files for package
pinning. URLs of the form <code>git+ssh://</code> or <code>hg+https://</code> are now allowed.
</li>
<li><code>opam lint</code> has been vastly improved.
</li>
</ul>
<p>... and much more</p>
<p>There is also a <a href="https://opam.ocaml.org/doc/Manual.html">new manual</a> documenting
the file and repository formats.</p>
<h3>Fixes</h3>
<p>See <a href="https://github.com/ocaml/opam/blob/1.2.1/CHANGES">the changelog</a> for a
summary or
<a href="https://github.com/ocaml/opam/issues?q=is%3Aissue+closed%3A%3E2014-10-16+closed%3A%3C2015-03-05+">closed issues</a>
in the bug-tracker for an overview.</p>
<h3>Experimental features</h3>
<p>These are mostly improvements to the file formats. You are welcome to use them,
but they won't be accepted into the
<a href="https://github.com/ocaml/opam-repository">official repository</a> until the next
release.</p>
<ul>
<li>New field <code>features:</code> in opam files, to help with <code>./configure</code> scripts and
documenting the specific features enabled in a given build. See the
<a href="https://github.com/ocaml/opam/blob/master/doc/design/depopts-and-features">original proposal</a>
and the section in the <a href="https://opam.ocaml.org/doc/Manual.html#opam">new manual</a>
</li>
<li>The "filter" language in opam files is now well defined, and documented in the
<a href="https://opam.ocaml.org/doc/Manual.html#Filters">manual</a>. In particular,
undefined variables are consistently handled, as well as conversions between
string and boolean values, with new syntax for converting bools to strings.
</li>
<li>New package flag "verbose" in opam files, that outputs the package's build
script to stdout
</li>
<li>New field <code>libexec:</code> in <code><name>.install</code> files, to install into the package's
lib dir with the execution bit set.
</li>
<li>Compilers can now be defined without source nor build instructions, and the
base packages defined in the <code>packages:</code> field are now resolved and then
locked. In practice, this means that repository maintainers can move the
compiler itself to a package, giving a lot more flexibility.
</li>
</ul>
Cumulus and ocp-memprof, a love story https://ocamlpro.com/blog/2015_03_04_cumulus_and_ocp_memprof_a_love_story2015-03-04T08:12:13Z2015-03-04T08:12:13Z
Çagdas Bozman
In this blog post, we went on the hunt of memory leaks in Cumulus by using our memory profiler: ocp-memprof. Cumulus is a feed aggregator based on Eliom, a framework for programming web sites and client/server web applications, part of the Ocsigen Project. First, run and get the memory snapshots To ...<p>In this blog post, we went on the hunt of memory leaks in Cumulus by using <a href="https://memprof.typerex.org/">our memory profiler: ocp-memprof</a>. Cumulus is a feed aggregator based on <a href="https://ocsigen.org/eliom/">Eliom</a>, a framework for programming web sites and client/server web applications, part of the <a href="https://ocsigen.org/">Ocsigen Project</a>.</p>
<h3>First, run and get the memory snapshots</h3>
<p>To test and run the server, we use <code>ocp-memprof</code> to start the process:</p>
<pre><code class="language-shell-session">$ ocp-memprof -exec ocsigenserver.opt -c ocsigenserver.opt.conf -v
</code></pre>
<p>There are several ways to obtain snapshots:</p>
<ul>
<li>automatically after each GC: there is nothing to do, this is the default behavior
</li>
<li>manually:
<ul>
<li>by sending a SIGUSR1 signal (the default signal can be changed by using <code>--signal SIG</code> option);
</li>
<li>by editing the source code and using the dump function in the <code>Headump</code> module:
</li>
</ul>
<pre><code class="language-ocaml">(* the string argument stands for the name of the dump *)
val dump : string -> unit
</code></pre>
</li>
</ul>
<p>Here, we use the default behavior and get a snapshot after every GC.</p>
<h3>The Memory Evolution Graph</h3>
<p>After running the server for a long time, the server process shows an unusually high consumption of memory. <code>ocp-memprof</code> automatically generates some statistics on the application memory usage. Below, we show the graph of memory consumption. On the x-axis, you can see the number of GCs, and on the y-axis, the memory size in bytes used by the most popular types in memory.</p>
<p><img src="/blog/assets/img/graph_cumulus_evolution_with_leak.png" alt="cumulus evolution with leak" /></p>
<p>Eliom expert users would quickly identify that most of the memory is used by XML nodes and attributes, together with strings and closures.</p>
<p>Unfortunately, it is not that easy to know which parts of Cumulus source code are the cause for the allocations of these XML trees. These trees are indeed abstract types allocated using functions exported by the Eliom modules. The main part of the allocations are then located in the Eliom source code.</p>
<p>Generally, we will have a problem to locate abstract type values just using allocation points. It may be useful to browse the memory graph which can be completely reconstructed from the snapshot to identify all paths between the globals and the blocks representing XML nodes.</p>
<h3>From roots to leaking nodes</h3>
<p><img src="/blog/assets/img/screenshot_cumulus_per_roots_with_leak.png" alt="screenshot_cumulus_per_roots_with_leak" /></p>
<p>The approach that we chose to identify the leak is to take a look at the pointer graph of our application in order to identify the roots retaining a significant portion of the memory. Above, we can observe the table of the retained size, for all roots of the application. What we can tell quickly is that <strong>92.2%</strong> of our memory is retained by values with finalizers.</p>
<p>Below, looking at them more closely, we can state that there is a significant amount of values of type:</p>
<p>[code language="fsharp" gutter="false"]
'a Eliom_comet_base.channel_data Lwt_stream.t -> unit
[/code]</p>
<p><img src="/blog/assets/img/screenshot_cumulus_per_roots_with_leak_zoomed.png" alt="screenshot_cumulus_per_roots_with_leak_zoomed" /></p>
<p>Probably, these finalizers are never called in order to free their associated values. The leak is not trivial to track down and fix. However, a quick fix is possible in the case of Cumulus.</p>
<h3>Identifying the source code and patching it</h3>
<p>After further investigation into the source code of Cumulus, we found the only location where such values are allocated:</p>
<pre><code class="language-ocaml">(* $ROOT/cumulus/src/base/feeds.ml *)
let (event, call_event) =
  let (private_event, call_event) = React.E.create () in
  let event = Eliom_react.Down.of_react private_event in
  (event, call_event)
</code></pre>
<p>The function <code>of_react</code> takes an optional argument <code>~scope</code> to specify the way that <code>Eliom_comet.Channel.create</code> has to use the communication channel.</p>
<p>By changing the default value of the scope to another one provided by the Eliom module, we now have only one channel, and every client uses this channel to communicate with the server (the default method created one channel per client).</p>
<pre><code class="language-ocaml">(* $ROOT/cumulus/src/base/feeds.ml *)
let (event, call_event) =
  let (private_event, call_event) = React.E.create () in
  let event =
    Eliom_react.Down.of_react
      ~scope:Eliom_common.site_scope private_event
  in
  (event, call_event)
</code></pre>
<h3>Checking the fix</h3>
<p>After patching the source code, we recompile our application and re-execute the process as before. Below, we can observe the new pointer graph. By changing the default value of <code>scope</code>, the size retained by finalizers drops from <strong>92.2% to 0%</strong>!</p>
<p><img src="/blog/assets/img/screenshot_cumulus_per_roots_fixed.png" alt="screenshot_cumulus_per_roots_fixed" /></p>
<p>The new evolution graph below shows that the memory usage drops from <strong>45Mb (still growing quickly) for a few hundreds connections to 5.2Mb</strong> for thousands connections.</p>
<p><img src="/blog/assets/img/graph_cumulus_evolution_fixed.png" alt="graph_cumulus_evolution_fixed" /></p>
<h3>Conclusion</h3>
<p>As a reminder, a finalisation function is a function that is called on a (heap-allocated) value when that value becomes unreachable.</p>
<p>The GC calls finalisation functions so that the resources associated with a value can be released before it is deallocated. You need to pay special attention when writing such finalisation functions, since anything reachable from the closure of a finalisation function is itself considered reachable. You also need to be careful not to make the value you want to free reachable again.</p>
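<p>For readers unfamiliar with the mechanism, here is a minimal, self-contained sketch of attaching a finalisation function with <code>Gc.finalise</code>; it is only an illustration and is not taken from the Cumulus or Eliom code:</p>
<pre><code class="language-ocaml">(* Minimal illustration of finalisation functions. *)
let () =
  (* Allocate a heap value and attach a finaliser to it. *)
  let buffer = ref (Bytes.create 1024) in
  Gc.finalise (fun _ -> print_endline "buffer collected") !buffer;
  (* Drop the last reference: the old buffer becomes unreachable... *)
  buffer := Bytes.create 0;
  (* ...and a full major collection may now run its finaliser. *)
  Gc.full_major ()
</code></pre>
<p>If the finalisation closure, or anything reachable from a global root, kept a reference to the buffer, it would never become unreachable and the finaliser would never run: this is essentially what keeps leaking values alive in the situation observed above.</p>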
<p>This example is online in our gallery of examples if you want to see and explore the graphs (<a href="https://memprof.typerex.org/users/04db0c7fb9232a0829e862d5bb2801fb/2015-03-02_16-04-33_7146967976ee57b0a97e053109440846_12249/">with the leak</a> and <a href="https://memprof.typerex.org/users/04db0c7fb9232a0829e862d5bb2801fb/2015-03-02_16-13-14_dd080e47d1bf4d18d3538d37769f325f_14185/">without the leak</a>).</p>
<p>Do not hesitate to use <code>ocp-memprof</code> on your applications. Of course, all feedback and suggestions on using <code>ocp-memprof</code> are welcome, just send us a mail!
More information:</p>
<ul>
<li>Homepage: <a href="https://memprof.typerex.org/">https://memprof.typerex.org/</a>
</li>
<li>Usage: <a href="https://memprof.typerex.org/free-version.php">https://memprof.typerex.org/free-version.php</a>
</li>
<li>Support: <a href="https://memprof.typerex.org/report-a-bug.php">https://memprof.typerex.org/report-a-bug.php</a>
</li>
<li>Gallery of examples: <a href="https://memprof.typerex.org/gallery.php">https://memprof.typerex.org/gallery.php</a>
</li>
<li>Commercial: <a href="https://memprof.typerex.org/commercial-version.php">https://memprof.typerex.org/commercial-version.php</a>
</li>
</ul>
Private Release of Alt-Ergo 1.00https://ocamlpro.com/blog/2015_01_29_private_release_of_alt_ergo_1_002015-01-29T08:12:13Z2015-01-29T08:12:13Z
Mohamed Iguernlala
altergo logo After the public release of Alt-Ergo 0.99.1 last December, it's time to announce a new major private version (1.00) of our SMT solver. As usual: we freely provide a JavaScript version on Alt-Ergo's website
we provide a private access to our internal repositories for academia users and o...<p><img src="/blog/assets/img/logo_alt_ergo.png" alt="altergo logo" /></p>
<p>After the public release of Alt-Ergo 0.99.1 last December, it's time to announce a new major private version (1.00) of our SMT solver. As usual:</p>
<ul>
<li>we freely provide a JavaScript version on Alt-Ergo's website
</li>
<li>we provide a private access to our internal repositories for academia users and our clients.
</li>
</ul>
<h3>Quick Evaluation</h3>
<p>A quick comparison between this new version and the previous releases is given below. Timeout is set to 60 seconds. The benchmark is made of 19044 formulas: (a) some of these formulas are known to be invalid, and (b) some of them are out of scope of current SMT solvers. The results are obtained with Alt-Ergo's native input language.</p>
<table class="tableau" style="font-size: 90%; width: 100%;">
<thead>
<tr>
<td></td>
<th>public release<br />
0.95.2</th>
<th>public release<br />
0.99.1</th>
<th>private release<br />
1.00</th>
</tr>
</thead>
<tbody>
<tr>
<th>Proved Valid</th>
<td>15980</td>
<td>16334</td>
<td>17638</td>
</tr>
<tr>
<th>Proved Valid (%)</th>
<td>84.01 %</td>
<td>85.77 %</td>
<td>92.62 %</td>
</tr>
<tr>
<th>Required time (seconds)</th>
<td>10831</td>
<td>10504</td>
<td>9767</td>
</tr>
<tr>
<th>Average speed<br />
(valid formulas per second)</th>
<td>1.47</td>
<td>1.55</td>
<td>1.81</td>
</tr>
</tbody>
</table>
<h3>Main Novelties of Alt-Ergo 1.00</h3>
<h4>General Improvements</h4>
<ul>
<li>theories data structures: semantic values (the internal theory representation of terms) are now hash-consed. This enables the use of hash-based comparison (instead of structural comparison) when possible (a generic sketch of the hash-consing technique is given after this list)
</li>
<li>theories combination: the dispatcher component, which sends literals assumed by the SAT solver to different theories depending on whether these literals are equalities, disequalities or inequalities, has been re-implemented. The new code is much simpler and enables some optimizations and factorizations that could not be done before
</li>
<li>case-split analysis: we made several improvements in the heuristics of the case-split analysis mechanism over finite domains
</li>
<li>explanations propagation: we improved explanations propagation in congruence closure and linear arithmetic algorithms. This makes the proofs faster thanks to a better back-jumping in the SAT solver part
</li>
<li>linear integer arithmetic: we re-implemented several parts of linear arithmetic and introduced important improvements in the Fourier-Motzkin algorithm to make it run on smaller sub-problems and avoid some useless executions. These optimizations allowed a significant speed up on our internal benchmarks
</li>
<li>data structures: we optimized hash-consing and some functions in the "formula" and "literal" modules
</li>
<li>SAT solving: we made a lot of improvements in the default SAT-solver and in the SatML plugin. In particular, the solvers now send lists of facts (literals) to "the decision procedure part" instead of sending them one by one. This avoids intermediate calls to some "expensive" algorithms, such as Fourier-Motzkin
</li>
<li>Matching: we extended the E-matching algorithm to also perform matching modulo the theory of records. In addition, we simplified matching heuristics and optimized the E-matching process to avoid computing the same instances several times
</li>
<li>Memory management: thanks to the ocp-memprof tool (http://memprof.typerex.org/), we identified some parts of Alt-Ergo that needed some improvements in order to avoid useless memory allocations, and thus unburden the OCaml garbage collector
</li>
<li>the function that retrieves the used axioms and predicates (when option 'save-used-context' is activated) has been improved
</li>
</ul>
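<p>To give a rough idea of the hash-consing technique mentioned in the first item above, here is a small, generic sketch; it is only an illustration and does not reflect Alt-Ergo's actual data structures:</p>
<pre><code class="language-ocaml">(* Generic hash-consing sketch: every structurally equal node is built
   once and tagged with a unique integer, so equality tests become
   constant-time comparisons on the tags. *)
type term = { node : node; tag : int }
and node =
  | Var of string
  | Add of term * term

module H = Hashtbl.Make (struct
  type t = node
  (* Sub-terms are already hash-consed, so physical equality is enough. *)
  let equal n1 n2 = match n1, n2 with
    | Var x, Var y -> x = y
    | Add (a1, b1), Add (a2, b2) -> a1 == a2 && b1 == b2
    | _ -> false
  let hash = function
    | Var x -> Hashtbl.hash x
    | Add (a, b) -> Hashtbl.hash (a.tag, b.tag)
end)

let table : term H.t = H.create 251
let counter = ref 0

let hashcons node =
  try H.find table node
  with Not_found ->
    incr counter;
    let t = { node; tag = !counter } in
    H.add table node t;
    t

let var x = hashcons (Var x)
let add a b = hashcons (Add (a, b))

(* Tag (or physical) comparison replaces structural comparison. *)
let equal t1 t2 = t1.tag = t2.tag
</code></pre>
<p>Because equal terms are shared, comparisons and hash-table lookups on terms no longer need to traverse their structure, which is what makes hash-based comparison possible in the theories' data structures.</p>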
<h4>Bug Fixes</h4>
<ul>
<li>6 in the "inequalities" module of linear arithmetic
</li>
<li>4 in the "formula" module
</li>
<li>3 in the "ty" module used for types representation and manipulation
</li>
<li>2 in the "theories front-end" module that interacts with the SAT solvers
</li>
<li>1 in the "congruence closure" algorithm
</li>
<li>1 in "existential quantifiers elimination" module
</li>
<li>1 in the "type-checker"
</li>
<li>1 in the "AC theory" of associative and commutative function symbols
</li>
<li>1 in the "union-find" module
</li>
</ul>
<h4>New OCamlPro Plugins</h4>
<ul>
<li>profiling plugin: when activated, this plugin records and prints some information about the current execution of Alt-Ergo every 'x' seconds: In particular, one can observe a module being activated, a function being called, the amount of time spent in every module/function, the current decision/instantiation level, the number of decisions/instantiations that have been made so far, the number of case-splits, of boolean/theory conflicts, of assumptions in the decision procedure, of generated instances per axiom, ….
</li>
<li>fm-simplex plugin: when activated, this plugin is used instead of the Fourier-Motzkin method to infer bounds for linear integer arithmetic affine forms (which are used in the case-split analysis process). This module uses the Simplex algorithm to simulate particular runs of Fourier-Motzkin, which makes it scale better on linear integer arithmetic problems containing a lot of inequalities
</li>
</ul>
<h4>New Options</h4>
<ul>
<li>
<p>version-info: prints some information about this version of Alt-Ergo (release and compilation dates, release commit ID)</p>
</li>
<li>
<p>no-theory: deactivate theory reasoning. In this case, only the SAT-solver and the matching parts are working</p>
</li>
<li>
<p>inequalities-plugin: specify a plugin to use, instead of the "default" Fourier-Motzkin algorithm, to handle inequalities of linear arithmetic</p>
</li>
<li>
<p>tighten-vars: when this option is set, the Fm-Simplex plugin will try to infer bounds for integer variables as well. Note that this option may be very expensive</p>
</li>
<li>
<p>profiling-plugin: specify a profiling plugin to use to monitor an execution of Alt-Ergo</p>
</li>
<li>
<p>profiling <freq>: makes the profiling module print its information every <freq> seconds</p>
</li>
<li>
<p>no-tcp: deactivate constraints propagation modulo theories</p>
</li>
</ul>
<h4>Removed Capabilities</h4>
<ul>
<li>the pruning module used in the frontend is now removed
</li>
<li>the SMT and SMT2 front-ends are removed. We plan to implement a new front-end for SMT2 in upcoming releases
</li>
</ul>
OPAM 1.2 and Travis CIhttps://ocamlpro.com/blog/2014_12_18_opam_1.2_and_travis_ci2014-12-18T08:12:13Z2014-12-18T08:12:13Z
Thomas Gazagnaire
The new pinning feature of OPAM 1.2 enables new interesting workflows for your day-to-day development in OCaml projects. I will briefly describe one of them here: simplifying continuous testing with Travis CI and GitHub. Creating an opam file As explained in the previous post, adding an opam file at...<p>The <a href="https://opam.ocaml.org/blog/opam-1-2-pin/">new pinning feature</a> of OPAM 1.2 enables new interesting
workflows for your day-to-day development in OCaml projects. I will
briefly describe one of them here: simplifying continuous testing with
<a href="https://travis-ci.org/">Travis CI</a> and
<a href="https://github.com/">GitHub</a>.</p>
<h2>Creating an <code>opam</code> file</h2>
<p>As explained in the <a href="https://opam.ocaml.org/blog/opam-1-2-pin/">previous post</a>, adding an <code>opam</code> file at the
root of your project now lets you pin development versions of your
project directly. It's very easy to create a default template with OPAM 1.2:</p>
<pre><code class="language-shell-session">$ opam pin add <my-project-name> . --edit
[... follow the instructions ...]
</code></pre>
<p>That command should create a fresh <code>opam</code> file; if not, you might
need to fix the warnings in the file by re-running the command. Once
the file is created, you can edit it directly and use <code>opam lint</code> to
check that it is well-formed.</p>
<p>If you want to run tests, you can also mark test-only dependencies with the
<code>{test}</code> constraint, and add a <code>build-test</code> field. For instance, if you use
<code>oasis</code> and <code>ounit</code>, you can use something like:</p>
<pre><code class="language-shell-session">build: [
["./configure" "--prefix=%{prefix}%" "--%{ounit:enable}%-tests"]
[make]
]
build-test: [make "test"]
depends: [
...
"ounit" {test}
...
]
</code></pre>
<p>Without the <code>build-test</code> field, the continuous integration scripts
will just test the compilation of your project for various OCaml
compilers.
OPAM doesn't run tests by default, but you can make it do so by
using <code>opam install -t</code> or setting the <code>OPAMBUILDTEST</code>
environment variable in your local setup.</p>
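<p>For instance, reusing the package name pinned above, either of the following should trigger the <code>build-test</code> instructions (a sketch only; adapt the package name to your project):</p>
<pre><code class="language-shell-session">$ opam install -t <my-project-name>
$ # or, for the whole session:
$ export OPAMBUILDTEST=1
$ opam install <my-project-name>
</code></pre>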
<h2>Installing the Travis CI scripts</h2>
<p><a href="https://travis-ci.org/">Travis CI</a> is a free service that enables continuous testing on your
GitHub projects. It uses Ubuntu containers and runs the tests for at most 50
minutes per test run.</p>
<p>To use Travis CI with your OCaml project, you can follow the instructions on
<a href="https://github.com/ocaml/ocaml-travisci-skeleton">https://github.com/ocaml/ocaml-travisci-skeleton</a>. Basically, this involves:</p>
<ul>
<li>adding
<a href="https://github.com/ocaml/ocaml-travisci-skeleton/blob/master/.travis.yml">.travis.yml</a>
at the root of your project. You can tweak this file to test your
project with different versions of OCaml. By default, it will use
the latest stable version (today: 4.02.1, but it will be updated for
each new compiler release). For every OCaml version that you want to
test (supported values for <code><VERSION></code> are <code>3.12</code>, <code>4.00</code>,
<code>4.01</code> and <code>4.02</code>) add the line:
</li>
</ul>
<pre><code class="language-shell-session">env:
- OCAML_VERSION=<VERSION>
</code></pre>
<ul>
<li>signing in at <a href="https://travis-ci.org/">TravisCI</a> using your GitHub account and
enabling the tests for your project (click on the <code>+</code> button on the
left pane).
</li>
</ul>
<p>And that's it, your project now has continuous integration, using the OPAM 1.2
pinning feature and Travis CI scripts.</p>
<h2>Testing Optional Dependencies</h2>
<p>By default, the script will not try to install the <a href="https://opam.ocaml.org/doc/manual/dev-manual.html#sec9">optional
dependencies</a> specified in your <code>opam</code> file. To do so, you
need to manually specify which combination of optional dependencies
you want to test using the <code>DEPOPTS</code> environment variable. For
instance, to test <code>cohttp</code> first with <code>lwt</code>, then with <code>async</code> and
finally with both <code>lwt</code> and <code>async</code> (but only on the <code>4.01</code> compiler)
you should write:</p>
<pre><code class="language-shell-session">env:
- OCAML_VERSION=latest DEPOPTS=lwt
- OCAML_VERSION=latest DEPOPTS=async
- OCAML_VERSION=4.01 DEPOPTS="lwt async"
</code></pre>
<p>As usual, your contributions and feedback on this new feature are <a href="https://github.com/ocaml/ocaml-travisci-skeleton/issues/">gladly welcome</a>.</p>
OPAM 1.2.0 Releasedhttps://ocamlpro.com/blog/2014_10_23_opam_1.2.0_released2014-10-23T08:12:13Z2014-10-23T08:12:13Z
Louis Gesbert
We are very proud to announce the availability of OPAM 1.2.0. Upgrade from 1.1 Simply follow the usual instructions, using your preferred method (package from your distribution, binary, source, etc.) as documented on the homepage. NOTE: There are small changes to the internal repository format (~/.o...<p>We are very proud to announce the availability of OPAM 1.2.0.</p>
<h3>Upgrade from 1.1</h3>
<p>Simply follow the usual instructions, using your preferred method (package from
your distribution, binary, source, etc.) as documented on the
<a href="https://opam.ocaml.org/doc/Install.html">homepage</a>.</p>
<blockquote>
<p><strong>NOTE</strong>: There are small changes to the internal repository format (~/.opam).
It will be transparently updated on first run, but in case you might want to
go back and have anything precious there, you're advised to back it up.</p>
</blockquote>
<h3>Usability</h3>
<p>A lot of work has been put into providing a cleaner interface, with helpful
behaviour and messages in case of errors.</p>
<p>The <a href="https://opam.ocaml.org/doc/">documentation pages</a> also have been largely
rewritten for consistency and clarity.</p>
<h3>New features</h3>
<p>This is just the top of the list:</p>
<ul>
<li>An extended and versatile <code>opam pin</code> command. See the
<a href="../opam-1-2-pin">Simplified packaging workflow</a>
</li>
<li>More expressive queries, see for example <code>opam source</code>
</li>
<li>New metadata fields, including source repositories, bug-trackers, and finer
control of package behaviour
</li>
<li>An <code>opam lint</code> command to check the quality of packages
</li>
</ul>
<p>For more detail, see <a href="../opam-1-2-0-beta4">the announcement for the beta</a>,
<a href="https://raw.githubusercontent.com/ocaml/opam/1.2.0/CHANGES">the full changelog</a>,
and <a href="https://github.com/ocaml/opam/issues?q=label%3A%22Feature+Wish%22+milestone%3A1.2+is%3Aclosed">the bug-tracker</a>.</p>
<h3>Package format</h3>
<p>The package format has been extended to the benefit of both packagers and users.
The repository already accepts packages in the 1.2 format, and this won't
affect 1.1 users as a rewrite is done on the server for compatibility with 1.1.</p>
<p>If you are hosting a repository, you may be interested in these
<a href="https://github.com/ocaml/opam/tree/master/admin-scripts">administration scripts</a>
to quickly take advantage of the new features or retain compatibility.</p>
OPAM 1.2: Repository Pinninghttps://ocamlpro.com/blog/2014_08_19_opam_1.2_repository_pinning2014-08-19T08:12:13Z2014-08-19T08:12:13Z
Louis Gesbert
Most package managers support some pin functionality to ensure that a given package remains at a particular version without being upgraded. The stable OPAM 1.1 already supported this by allowing any existing package to be pinned to a target, which could be a specific released version, a local filesy...<p><img src="/blog/assets/img/picture_camel_pin.jpg"></img></p>
<p>Most package managers support some <em>pin</em> functionality to ensure that a given
package remains at a particular version without being upgraded.
The stable OPAM 1.1 already supported this by allowing any existing package to be
pinned to a <em>target</em>, which could be a specific released version, a local filesystem
path, or a remote version-controlled repository.</p>
<p>However, the OPAM 1.1 pinning workflow only lets you pin packages that <em>already exist</em> in your OPAM
repositories. To declare a new package, you had to go through creating a
local repository, registering it in OPAM, and adding your package definition there.
That workflow, while reasonably clear, required the user to know about the repository
format and the configuration of an internal repository in OPAM before actually getting to
writing a package. Besides, you were on your own for writing the package
definition, and the edit-test loop wasn't as friendly as it could have been.</p>
<p>A natural, simpler workflow emerged from allowing users to <em>pin</em> new package
names that don't yet exist in an OPAM repository:</p>
<ol>
<li>choose a name for your new package
</li>
<li><code>opam pin add</code> in the development source tree
</li>
<li>the package is created on-the-fly and registered locally.
</li>
</ol>
<p>To make it even easier, OPAM can now interactively help you write the
package definition, and you can test your updates with a single command.
This blog post explains this new OPAM 1.2 functionality in more detail;
you may also want to check out the new <a href="https://opam.ocaml.org/doc/1.2/Packaging.html" title="OPAM 1.2 doc preview, packaging guide">Packaging tutorial</a>
relying on this workflow.</p>
<h3>From source to package</h3>
<p>For illustration purposes in this post I'll use a tiny tool that I wrote some time ago and
never released: <a href="https://github.com/OCamlPro/ocp-reloc" title="ocp-reloc repo on Github">ocp-reloc</a>. It's a simple binary that fixes up the
headers of OCaml bytecode files to make them relocatable, which I'd like
to release into the public OPAM repository.</p>
<h4>"opam pin add"</h4>
<p>The command <code>opam pin add <name> <target></code> pins package <code><name></code> to
<code><target></code>. We're interested in pinning the <code>ocp-reloc</code> package
name to the project's source directory.</p>
<pre><code>cd ocp-reloc
opam pin add ocp-reloc .
</code></pre>
<p>If <code>ocp-reloc</code> were an existing package, the metadata would be fetched from
the package description in the OPAM repositories. Since the package doesn't yet exist,
OPAM 1.2 will instead prompt for on-the-fly creation:</p>
<pre><code>Package ocp-reloc does not exist, create as a NEW package ? [Y/n] y
ocp-reloc is now path-pinned to ~/src/ocp-reloc
</code></pre>
<blockquote>
<p>NOTE: if you are using <strong>beta4</strong>, you may get a <em>version-control</em>-pin instead,
because we added auto-detection of version-controlled repos. This turned out to
be confusing (<a href="https://github.com/ocaml/opam/issues/1582">issue #1582</a>),
because your changes wouldn't be reflected until you commit, so
this has been reverted in favor of a warning. Add the <code>--kind path</code> option to
make sure that you get a <em>path</em>-pin.</p>
</blockquote>
<h4>OPAM Package Template</h4>
<p>Now your package still needs some kind of definition for OPAM to acknowledge it;
that's where templates kick in: the command above triggers an editor with a pre-filled
<code>opam</code> file that you just have to complete. This not only saves time in
looking up the documentation, it also helps getting consistent package
definitions, reduces errors, and promotes filling in optional but recommended
fields (homepage, etc.).</p>
<pre><code class="language-shell-session">opam-version: "1.2"
name: "ocp-reloc"
version: "0.1"
maintainer: "Louis Gesbert <louis.gesbert@ocamlpro.com>"
authors: "Louis Gesbert <louis.gesbert@ocamlpro.com>"
homepage: ""
bug-reports: ""
license: ""
build: [
["./configure" "--prefix=%{prefix}%"]
[make]
]
install: [make "install"]
remove: ["ocamlfind" "remove" "ocp-reloc"]
depends: "ocamlfind" {build}
</code></pre>
<p>After adding some details (most importantly the dependencies and
build instructions), I can just save and exit. Much like other system tools
such as <code>visudo</code>, it checks for syntax errors immediately:</p>
<pre><code>[ERROR] File "/home/lg/.opam/4.01.0/overlay/ocp-reloc/opam", line 13, character 35-36: '.' is not a valid token.
Errors in /home/lg/.opam/4.01.0/overlay/ocp-reloc/opam, retry editing ? [Y/n]
</code></pre>
<h4>Installation</h4>
<p>You probably want to try your brand new package right away, so
OPAM's default action is to try and install it (unless you specified <code>-n</code>):</p>
<pre><code class="language-shell-session">ocp-reloc needs to be installed.
The following actions will be performed:
- install cmdliner.0.9.5 [required by ocp-reloc]
- install ocp-reloc.0.1*
=== 1 to install ===
Do you want to continue ? [Y/n]
</code></pre>
<p>I usually don't get it working the first time around, but <code>opam pin edit ocp-reloc</code> and <code>opam install ocp-reloc -v</code> can be used to edit and retry until
it does.</p>
<h4>Package Updates</h4>
<p>How do you keep working on your project as you edit the source code, now that
you are installing through OPAM? This is as simple as:</p>
<pre><code>opam upgrade ocp-reloc
</code></pre>
<p>This will pick up changes from your source repository and reinstall any packages
that are dependent on <code>ocp-reloc</code> as well, if any.</p>
<p>So far, we've been dealing with the metadata locally used by your OPAM
installation, but you'll probably want to share this among developers of your
project even if you're not releasing anything yet. OPAM takes care of this
by prompting you to save the <code>opam</code> file back to your source tree, where
you can commit it directly into your code repository.</p>
<pre><code class="language-shell-session">cd ocp-reloc
git add opam
git commit -m 'Add OPAM metadata'
git push
</code></pre>
<h3>Publishing your New Package</h3>
<p>The above information is sufficient to use OPAM locally to integrate new code
into an OPAM installation. Let's look at how other developers can share this
metadata.</p>
<h4>Picking up your development package</h4>
<p>If another developer wants to pick up <code>ocp-reloc</code>, they can directly use
your existing metadata by cloning a copy of your repository and issuing their
own pin.</p>
<pre><code class="language-shell-session">git clone git://github.com/OCamlPro/ocp-reloc.git
opam pin add ocp-reloc/
</code></pre>
<p>Even specifying the package name is optional since this is documented in
<code>ocp-reloc/opam</code>. They can start hacking, and if needed use <code>opam pin edit</code> to
amend the opam file too. No need for a repository, no need to share anything more than a
versioned <code>opam</code> file within your project.</p>
<h4>Cloning already existing packages</h4>
<p>We have been focusing on an unreleased package, but the same
functionality is also of great help in handling existing packages, whether you
need to quickly hack into them or are just curious. Let's consider how to
modify the <a href="https://github.com/ocaml/omd" title="OMD page on Github"><code>omd</code> Markdown library</a>.</p>
<pre><code class="language-shell-session">opam source omd --pin
cd omd.0.9.7
...patch...
opam upgrade omd
</code></pre>
<p>The new <code>opam source</code> command will clone the source code of the library you
specify, and the <code>--pin</code> option will also pin it locally to ensure it is used
in preference to all other versions. This will also take care of recompiling
any installed packages that are dependent on <code>omd</code> using your patched version
so that you notice any issues right away.</p>
<blockquote>
<p>There's a new OPAM field available in 1.2 called <code>dev-repo</code>. If you specify
this in your metadata, you can directly pin to the upstream repository via
<code>opam source --dev-repo --pin</code>.</p>
</blockquote>
<p>If the upstream repository for the package contains an <code>opam</code> file, that file will be picked up
in preference to the one from the OPAM repository as soon as you pin the package.
The idea is to have:</p>
<ul>
<li>a <em>development</em> <code>opam</code> file that is versioned along with your source code
(and thus accurately tracks the latest dependencies for your package).
</li>
<li>a <em>release</em> <code>opam</code> file that is published on the OPAM repository and can
be updated independently without making a new release of the source code.
</li>
</ul>
<p>How to get from the former to the latter will be the subject of another post!
In the meantime, all users of the <a href="../opam-1-2-0-beta4" title="OPAM 1.2.0 beta4 announcement">beta</a> are welcome to share their
experience and thoughts on the new workflow on the <a href="https://github.com/ocaml/opam/issues" title="OPAM bug-tracker on Github">bug tracker</a>.</p>
OPAM 1.2.0 public beta releasedhttps://ocamlpro.com/blog/2014_08_14_opam_1.2.0_public_beta_released2014-08-14T08:12:13Z2014-08-14T08:12:13Z
OCaml Platform Team
It has only been 18 months since the first release of OPAM, but it is already difficult to remember a time when we did OCaml development without it. OPAM has helped bring together much of the open-source code in the OCaml community under a single umbrella, making it easier to discover, depend on, a...<p>It has only been 18 months since the first release of OPAM, but it is already
difficult to remember a time when we did OCaml development without it. OPAM
has helped bring together much of the open-source code in the OCaml community
under a single umbrella, making it easier to discover, depend on, and maintain
OCaml applications and libraries. We have seen steady growth in the number
of new packages, updates to existing code, and a diverse group of contributors.
<img src="/blog/assets/img/graph_opam1.2_packages.png"/></p>
<p>OPAM has turned out to be more than just another package manager. It is also
increasingly central to the demanding workflow of industrial OCaml development,
since it supports multiple simultaneous (patched) compiler installations,
sophisticated package version constraints that ensure statically-typed code can
be recompiled without conflict, and a distributed workflow that integrates
seamlessly with Git, Mercurial or Darcs version control. OPAM tracks multiple
revisions of a single package, thereby letting packages rely on older
interfaces if they need to for long-term support. It also supports multiple
package repositories, letting users blend the global stable package set with
their internal revisions, or building completely isolated package universes for
closed-source products.</p>
<p>Since its initial release, we have been learning from the extensive feedback
from our users about how they use these features as part of their day-to-day
workflows. Larger projects like <a href="http://wiki.xen.org/wiki/XAPI">XenAPI</a>, the <a href="http://ocsigen.org">Ocsigen</a> web suite,
and the <a href="http://openmirage.org">Mirage OS</a> publish OPAM <a href="https://opam.ocaml.org/doc/Advanced_Usage.html#Handlingofrepositories">remotes</a> that build
their particular software suites.
Complex applications such as the <a href="https://github.com/facebook/pfff/wiki/Main">Pfff</a> static analysis tool and <a href="https://code.facebook.com/posts/264544830379293/hack-a-new-programming-language-for-hhvm/">Hack</a>
language from Facebook, the <a href="https://github.com/frenetic-lang/frenetic">Frenetic</a> SDN language and the <a href="http://arakoon.org">Arakoon</a>
distributed key store have all appeared alongside these libraries.
<a href="https://www.janestreet.com">Jane Street</a> pushes regular releases of their
production <a href="http://janestreet.github.io/">Core/Async</a> suite every couple
of weeks.</p>
<p>One pleasant side-effect of the growing package database has been the
contribution of tools from the community that make the day-to-day use of OCaml
easier. These include the <a href="https://github.com/diml/utop">utop</a> interactive toplevel, the <a href="https://github.com/andrewray/iocaml">IOCaml</a>
browser notebook, and the <a href="https://github.com/the-lambda-church/merlin">Merlin</a> IDE extension. While these tools are an
essential first step, there's still some distance to go to make the OCaml
development experience feel fully integrated and polished.</p>
<p>Today, we are kicking off the next phase of evolution of OPAM and starting the
journey towards building an <em>OCaml Platform</em> that combines the OCaml compiler
toolchain with a coherent workflow for build, documentation, testing and IDE
integration. As always with OPAM, this effort has been a collaborative effort,
coordinated by the <a href="https://www.cl.cam.ac.uk/projects/ocamllabs/">OCaml Labs</a> group in Cambridge and
<a href="/">OCamlPro</a> in France.
The OCaml Platform builds heavily on OPAM, since it forms the substrate that
pulls together the tools and facilitates a consistent development workflow.
We've therefore created this blog on <a href="https://opam.ocaml.org">opam.ocaml.org</a> to chart its progress,
announce major milestones, and eventually become a community repository of all
significant activity.</p>
<p>Major points:</p>
<ul>
<li>
<p><strong>OPAM 1.2 beta available</strong>:
Firstly, we're announcing <strong>the availability of the OPAM 1.2 beta</strong>,
which includes a number of new features, hundreds of bug fixes, and pretty
new colours in the CLI. We really need your feedback to ensure a polished
release, so please do read the release notes below.</p>
</li>
<li>
<p>In the coming weeks, we will provide an overview of what the OCaml Platform is
(and is not), and describe an example workflow that the Platform can enable.</p>
</li>
<li>
<p><strong>Feedback</strong>: If you have questions or comments as you read these posts,
then please do join the <a href="https://lists.ocaml.org/listinfo/platform">platform@lists.ocaml.org</a> and make
them known to us.</p>
</li>
</ul>
<h2>Releasing the OPAM 1.2 beta4</h2>
<p>We are proud to announce the latest beta of OPAM 1.2. It comes packed with
<a href="https://github.com/ocaml/opam/issues?q=label%3A%22Feature+Wish%22+milestone%3A1.2+is%3Aclosed" title="Features added in 1.2 from the tracker on Github">new features</a>, stability and usability improvements. Here are the
highlights.</p>
<h3>Binary RPMs and DEBs!</h3>
<p>We now have binary packages available for Fedora 19/20, CentOS 6/7, RHEL7,
Debian Wheezy and Ubuntu! You can see the full set at the <a href="https://build.opensuse.org/package/show/home:ocaml/opam#">OpenSUSE Builder</a> site and
<a href="http://software.opensuse.org/download.html?project=home:ocaml&package=opam">download instructions</a> for your particular platform.</p>
<p>An OPAM binary installation doesn't need OCaml to be installed on the system, so you
can initialize a fresh, modern version of OCaml on older systems without needing it
to be packaged there.
On CentOS 6 for example:</p>
<pre><code class="language-shell-session">cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/home:ocaml/CentOS_6/home:ocaml.repo
yum install opam
opam init --comp=4.01.0
</code></pre>
<h3>Simpler user workflow</h3>
<p>For this version, we focused on improving the user interface and workflow. OPAM
is a complex piece of software that needs to handle complex development
situations. This implies things might go wrong, which is precisely when good
support and error messages are essential. OPAM 1.2 has much improved stability
and error handling: fewer errors and more helpful messages plus better state backups
when they happen.</p>
<p>In particular, a clear and meaningful explanation is extracted from the solver
whenever you are attempting an impossible action (unavailable package,
conflicts, etc.):</p>
<pre><code class="language-shell-session">$ opam install mirage-www=0.3.0
The following dependencies couldn't be met:
- mirage-www -> cstruct < 0.6.0
- mirage-www -> mirage-fs >= 0.4.0 -> cstruct >= 0.6.0
Your request can't be satisfied:
- Conflicting version constraints for cstruct
</code></pre>
<p>This sets OPAM ahead of many other package managers in terms of
user-friendliness. Since this is made possible using the tools from
<a href="https://www.irill.org" title="IRILL">IRILL</a> (which are also used for <a href="https://qa.debian.org/dose/debcheck/testing_main/" title="Debian Weather Service">Debian</a>), we hope that
this work will find its way into other package managers.
The extra analyses in the package solver interface are used to improve the
health of the central package repository, via the <a href="http://ows.irill.org" title="The OPAM Weather Service">OPAM Weather service</a>.</p>
<p>And in case stuff does go wrong, we added the <code>opam upgrade --fixup</code>
command that will get you back to the closest clean state.</p>
<p>The command-line interface is also more detailed and convenient, polishing and
documenting the rough areas. Just run <code>opam <subcommand> --help</code> to see the
manual page for the below features.</p>
<ul>
<li>More expressive queries based on dependencies.
</li>
</ul>
<pre><code class="language-shell-session">$ opam list --depends-on cow --rec
# Available packages recursively depending on cow.0.10.0 for 4.01.0:
cowabloga 0.0.7 Simple static blogging support.
iocaml 0.4.4 A webserver for iocaml-kernel and iocamljs-kernel.
mirage-www 1.2.0 Mirage website (written in Mirage)
opam2web 1.3.1 (pinned) A tool to generate a website from an OPAM repository
opium 0.9.1 Sinatra like web toolkit based on Async + Cohttp
stone 0.3.2 Simple static website generator, useful for a portfolio or documentation pages
</code></pre>
<ul>
<li>Check existing <code>opam</code> files to base new packages on.
</li>
</ul>
<pre><code class="language-shell-session">$ opam show cow --raw
opam-version: "1"
name: "cow"
version: "0.10.0"
[...]
</code></pre>
<ul>
<li>Clone the source code for any OPAM package to modify or browse the interfaces.
</li>
</ul>
<pre><code class="language-shell-session">$ opam source cow
Downloading archive of cow.0.10.0...
[...]
$ cd cow.0.10.0
</code></pre>
<p>We've also improved the general speed of the tool to cope with the much bigger
size of the central repository, which will be of importance for people building
on low-power ARM machines, and added a mechanism that will let you install
newer releases of OPAM directly from OPAM if you choose so.</p>
<h3>Yet more control for the packagers</h3>
<p>Packaging new libraries has been made as straightforward as possible.
Here is a quick overview; you may also want to check the <a href="/blog/2014_08_19_opam_1.2_repository_pinning" title="Blog post on OPAM Pin">OPAM 1.2 pinning</a> post.</p>
<pre><code class="language-shell-session">opam pin add <name> <sourcedir>
</code></pre>
<p>will generate a new package on the fly by detecting the presence of an <code>opam</code>
file within the source repository itself. We'll do a followup post next week
with more details of this extended <code>opam pin</code> workflow.</p>
<p>The package description format has also been extended with some new fields:</p>
<ul>
<li><code>bug-reports:</code> and <code>dev-repo:</code> add useful URLs
</li>
<li><code>install:</code> allows build and install commands to be split,
</li>
<li><code>flags:</code> is an entry point for several extensions that can affect your package.
</li>
</ul>
<p>Packagers can limit dependencies in scope by adding one
of the keywords <code>build</code>, <code>test</code> or <code>doc</code> in front of their constraints:</p>
<pre><code class="language-shell-session">depends: [
"ocamlfind" {build & >= 1.4.0}
"ounit" {test}
]
</code></pre>
<p>Here you don't specifically require <code>ocamlfind</code> at runtime, so changing it
won't trigger recompilation of your package. <code>ounit</code> is marked as only required
for the package's <code>build-test:</code> target, <em>i.e.</em> when installing with
<code>opam install -t</code>. This will reduce the amount of (re)compilation required
in day-to-day use.</p>
<p>We've also made optional dependencies more consistent by <em>removing</em> version
constraints from the <code>depopts:</code> field: their meaning was <a href="https://github.com/ocaml/opam/issues/200">unclear</a> and confusing.
The <code>conflicts</code> field is used to indicate versions of the optional dependencies
that are incompatible with your package to remove all ambiguity:</p>
<pre><code class="language-shell-session">depopts: [ "async" {>= "109.15.00"} & "async_ssl" {>= "111.06.00"} ]
</code></pre>
<p>becomes:</p>
<pre><code class="language-shell-session">depopts: [ "async" "async_ssl" ]
conflicts: [ "async" {< "109.15.00"}
"async_ssl" {< "111.06.00"} ]
</code></pre>
<p>There is an <a href="https://github.com/ocaml/opam/pull/1325" title="PR for preliminary 'features' feature on Github">upcoming <code>features</code> field</a> that will give more
flexibility in a clearer and consistent way for such complex cases.</p>
<h3>Easier to package and install</h3>
<p>Efforts were made on the build of OPAM itself as well to make it as easy as possible
to compile, bootstrap or install. There is no more dependency on camlp4 (which has
been moved out of the core distribution in OCaml 4.02.0), and the build process
is more conventional (get the source, run <code>./configure</code>, <code>make lib-ext</code> to get the few
internal dependencies, <code>make</code> and <code>make install</code>). Packagers can use <code>make cold</code>
to build OPAM with a locally compiled version of OCaml (useful for platforms where
it isn't packaged), and also use <code>make download-ext</code> to store all the external archives
within the source tree (for automated builds which forbid external net access).</p>
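<p>Concretely, the build sequence described above boils down to the following steps (a sketch, starting from an unpacked source tree):</p>
<pre><code class="language-shell-session">$ ./configure
$ make lib-ext      # fetch and build the few internal dependencies
$ make
$ make install
</code></pre>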
<p>The <a href="https://opam.ocaml.org/doc" title="Preview of documentation for OPAM 1.2">whole documentation</a> has been rewritten as well, to be better focused and
easier to browse. Please leave any feedback or changes on the documentation on the
<a href="https://github.com/ocaml/opam/issues">issue tracker</a>.</p>
<h3>Try it out !</h3>
<p>The <a href="https://github.com/ocaml/opam/releases/tag/1.2.0-beta4" title="Opam 1.2-beta4 release">public beta of OPAM 1.2</a> is just out. You're welcome to give it a try and
give us feedback before we roll out the release!</p>
<p>We'd be most interested in feedback on how easily you can work with the new
pinning features, on how the new metadata works for you... and on any errors you
may trigger that aren't followed by informative messages or clean behaviour.</p>
<p>If you are hosting a repository, the <a href="https://github.com/ocaml/opam/tree/master/admin-scripts" title="Opam admin scripts directory on Github">administration scripts</a> may help you quickly update all your packages to
benefit from the new features.</p>
OCamlPro Highlights: May-June 2014https://ocamlpro.com/blog/2014_07_16_ocamlpro_highlights_may_june_20142014-07-16T08:12:13Z2014-07-16T08:12:13Z
Çagdas Bozman
Here is a short report on some of our public activities in May and June 2014. Towards OPAM 1.2 After a lot of discussions and work on OPAM itself, we are now getting to a clear workflow for OCaml developpers and packagers: the preliminary document for OPAM 1.2 is available here. The idea is that you...<p>Here is a short report on some of our public activities in May and June 2014.</p>
<h3>Towards OPAM 1.2</h3>
<p>After a lot of discussions and work on OPAM itself, we are now getting to a clear workflow for OCaml developers and packagers: the preliminary document for OPAM 1.2 is available <a href="https://github.com/AltGr/opam-wiki/blob/1.2/Packaging.md">here</a>. The idea is that you can now easily create and test the metadata locally, before having to get your package included in any repo: there is less administrative burden and it's much quicker to start, fix it, test it and get it right.</p>
<p>As things are getting pretty stable, we are closing the last few bugs and should be releasing 1.2~beta very shortly.</p>
<h3>OCaml Hacking Session</h3>
<p>We participated in the first OCaml hacking session in Paris organized by Thomas Braibant and supervised by Gabriel Scherer, who had kindly prepared in advance a selection of tasks. In particular, he came up with a list of open bugs in Mantis that make for good first descents into the compiler's code.</p>
<p>It was the first event of this kind for the OCaml Users in Paris (<a href="https://www.meetup.com/ocaml-paris/">OUPS</a>) meetup group. It was a success since everybody enjoyed it and some work has actually been achieved. We'll have to wait for the next one to confirm that!</p>
<p>On our front, Fabrice started working (with others) on a good, consensual Emacs profile; Pierre worked on building cross-compilers using Makefile templates; Benjamin wanted to evaluate the feasibility of handling ppx extension nodes correctly inside Emacs, and it turns out that elisp tools exist for the task! You can see a first experiment running in the following screen capture, or even <a href="https://files.ocamlpro.com/files/tuareg-mode-with-ppx.el">try the code</a> (just open it in emacs, do a <code>M-x eval-buffer</code> on it and then a <code>M-x tuareg-mode-with-ppx</code> on an OCaml file). But beware, it's not yet very mainstream and can make your Emacs crash.</p>
<p><img src="/blog/assets/img/screenshot_polymode_tuareg.png" alt="polymode-tuareg.png" /></p>
<h3>Alt-Ergo Development</h3>
<p>During the last two months, we participated in the supervision of an intern, Albin Coquereau - a graduate student at University Paris-Sud - in the VALS team, who worked on a conservative extension of the <a href="https://smtlib.cs.uiowa.edu/language.shtml">SMT2 standard input language</a> with prenex polymorphism à la ML and overloading. First results are promising. In the future, we plan to replace Alt-Ergo's input language with our extension of SMT2 in order to take advantage of SMT2's features and <a href="https://www.lri.fr/%7Econchon/publis/conchon-smt08.pdf">polymorphism's expressiveness</a>.</p>
<p>Recently, we have also published an <a href="/blog/2014_07_15_try_alt_ergo_in_your_browser/">online JavaScript-based version of Alt-Ergo</a> (based on private release 0.99).</p>
<h3>OCaml Adventures in Scilab Land</h3>
<p>We are currently working on the proper integration of our Scilab tools in the Scilab world, respecting its ways and conventions. For this, we built a Scilab module respecting the standard ATOMS interface. This module can embed an OCaml program inside the run-time environment of Scilab, so that OCaml functions can be called as external primitives. (Dyn)linking Scilab's various components, LLVM's and the OCaml run-time together was not that easy.</p>
<p>Symmetrically, we built an OCaml library to manipulate and build Scilab values from OCaml, so that our tools can introspect the dynamic environment of Scilab's interpreter. We also worked with the Scilab team to define an AST interchange mechanism.</p>
<p>We plan to use this module as an entry point for our JIT oriented type system, as well as to integrate Scilint, our style checker, so that a Scilab user can check their functions without leaving the Scilab toplevel.</p>
<h3>Experiment with Bytes and Backward Compatibility</h3>
<p>As announced by a long discussion on the caml-list, OCaml 4.02 introduces the first step towards eliminating a long-known OCaml problem: string mutability. The main difficulty is that resolving that problem necessarily breaks backward compatibility.</p>
<p>To achieve this first step, OCaml 4.02 comes with a new <code>bytes</code> type and a corresponding <code>Bytes</code> module, similar to OCaml 4.01 <code>String</code> module, but using the <code>bytes</code> type. The type of a few functions from the <code>String</code> module changed to use the <code>bytes</code> type (like <code>String.set</code>, <code>String.blit</code>... ). By default the <code>string</code> and <code>bytes</code> types are equal, hence ensuring backward compatibility for this release, but a new argument "<code>-safe-string</code>" to the compiler can be used to remove this equality, and will probably become the default in some future release.</p>
<pre><code class="language-ocaml"># let s = "foo";;
val s : string = "foo"
# s.[0] <- 'a';;
Characters 0-1:
s.[0] <- 'a';;
^
Error: This expression has type string but an expression was expected of type bytes
</code></pre>
<p>Notice that even when using <code>-safe-string</code> you shouldn't rely on strings being immutable. For instance, even if you compile the file below with <code>-safe-string</code>, the assertion in the function <code>g</code> does not necessarily hold:</p>
<p>If the following file <code>a.ml</code> is compiled with <code>-safe-string</code></p>
<pre><code class="language-ocaml">let s = "foo"
let f () = s
let g () = assert(s = "foo")
</code></pre>
<p>and the following file <code>b.ml</code> is compiled without <code>-safe-string</code>:</p>
<pre><code class="language-ocaml">let s = A.f () in
s.[0] <- 'a';
A.g ()
</code></pre>
<p>In <code>b.ml</code> the equality holds, so modifying the string is possible, and the assertion from <code>A.g</code> will fail.</p>
<p>So you should consider that for now <code>-safe-string</code> is only a compiler-enforced good practice. But this may (and should) change in future versions. The <code>ocamlc</code> man page says:</p>
<pre><code class="language-shell-session">-safe-string
Enforce the separation between types string and bytes, thereby
making strings read-only. This will become the default in a
future version of OCaml.
</code></pre>
<p>In other words if you want your current code to be forward-compatible, your code should start using <code>bytes</code> instead of <code>string</code> as soon as possible.</p>
<h4>Maintaining Compatibility between 4.01 and 4.02</h4>
<p>In our experiments, we found a convenient solution to start using the <code>bytes</code> type while still providing compatibility with 4.01: we use a small <code>StringCompat</code> module that we open at the beginning of all our files making use of <code>bytes</code>. Depending on the version of OCaml used to build the program, we provide two different implementations of <code>stringCompat.ml</code>.</p>
<ul>
<li>Before 4.02, our <code>stringCompat.ml</code> file provides a <code>bytes</code> type and a <code>Bytes</code> module, including the <code>String</code> module plus an often-used <code>Bytes.to_string</code> equivalent:
</li>
</ul>
<pre><code class="language-ocaml">type bytes = string
module Bytes = struct
include String
let to_string t = t
end
</code></pre>
<ul>
<li>After 4.02, our <code>stringCompat.ml</code> file is much simpler:
</li>
</ul>
<pre><code class="language-ocaml">type t = bytes
type bytes = t
module Bytes = Bytes
</code></pre>
<p>You might actually wonder why it is not simply empty. In fact, it is also good practice to compile with a warning for unused <code>open</code>s, and an empty <code>stringCompat.ml</code> would always trigger that warning in 4.02 for being useless. With this simple implementation, the <code>open</code> is always seen as useful, since any use of <code>bytes</code> or <code>Bytes</code> goes through the (virtual) indirection of <code>StringCompat</code>.</p>
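<p>For illustration, a file using this trick could look like the following hypothetical example, which compiles against both implementations of <code>stringCompat.ml</code> shown above:</p>
<pre><code class="language-ocaml">(* Hypothetical example: the same code builds on 4.01 and on 4.02
   (with or without -safe-string) thanks to the StringCompat indirection. *)
open StringCompat

let shout (s : string) : string =
  let b = Bytes.create (String.length s) in
  for i = 0 to String.length s - 1 do
    Bytes.set b i (Char.uppercase s.[i])
  done;
  Bytes.to_string b
</code></pre>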
<p>We plan to upload this module as a <code>string-compat</code> package in OPAM, so that everybody can use this trick. If you have a better solution, we'll be pleased to discuss it via a pull request on <code>opam-repository</code>.</p>
<h4>Testing whether your project correctly builds with "-safe-string"</h4>
<p>When your code has been adapted to use <code>bytes</code> whenever you need to modify a string, you can check that you didn't miss a case by using OCaml 4.02, without changing your build system. To do that, just set the environment variable <code>OCAMLPARAM</code> to <code>"safe-string=1,_"</code>. Notice that <code>OCAMLPARAM</code> should only be used for testing purposes: never set it in your build system or tools, as this would prevent testing new compiler options on your package (and you will receive complaints when the core developers can't deactivate the <code>"-w A -warn-error"</code> generated by your build system)!</p>
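<p>From a shell, such a test could look like this (the build command itself is just a placeholder for whatever your project normally uses):</p>
<pre><code class="language-shell-session">$ export OCAMLPARAM="safe-string=1,_"
$ make          # or your usual build command
$ unset OCAMLPARAM
</code></pre>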
<p>If your project passes this test and you don't use <code>"-warn-error"</code>, your package should continue to build without modification in the near and the not-so-near future (unless you link with the compiler internals of course).</p>
Try Alt-Ergo in Your Browserhttps://ocamlpro.com/blog/2014_07_15_try_alt_ergo_in_your_browser2014-07-15T08:12:13Z2014-07-15T08:12:13Z
Mohamed Iguernlala
Recently, we worked on an online Javascript-based serverless version of the Alt-Ergo SMT solver. In what follows, we will explain the principle of this version of Alt-Ergo, show how it can be used on a realistic example and compare its performances with bytecode and native binaries of Alt-Ergo. Comp...<p>Recently, we worked on an <a href="https://alt-ergo.ocamlpro.com/try.php">online Javascript-based serverless version of the Alt-Ergo SMT solver</a>. In what follows, we will explain the principle of this version of Alt-Ergo, show how it can be used on a realistic example and compare its performances with bytecode and native binaries of Alt-Ergo.</p>
<h3>Compilation</h3>
<p>"Try Alt-Ergo" is a JavaScript-based version of Alt-Ergo that can be run in compatible browsers (e.g. Firefox, Chromium) without requiring a server for computations. It is obtained by compiling the bytecode executable of the solver into JavaScript thanks to <a href="https://ocsigen.org/js_of_ocaml/">js_of_ocaml</a>. The <code>.js</code> file is generated following the scheme given below. Roughly speaking, it consists of three steps:</p>
<ul>
<li>A new frontend (<code>main_js.ml</code>) is added to the sources of Alt-Ergo. This file contains some glue code that allows the generated <code>.js</code> file to interact with an HTML file (insertion of buttons, modification of DIV contents, ...)
</li>
<li>The sources of Alt-Ergo and <code>main_js.ml</code> are compiled with <code>ocamlc</code>. The compilation makes use of a preprocessor provided by <code>js_of_ocaml</code>.
</li>
<li>The <code>js_of_ocaml</code> compiler is used to transform the bytecode generated by <code>ocamlc</code> into a JavaScript file.
</li>
</ul>
<p><img src="/blog/assets/img/diagram_try_alt_ergo_compilation.png" alt="try-alt-ergo-compilation" /></p>
<p>The <code>.js</code> file is then plugged into an HTML file that fits with the glue code inserted in <code>main_js.ml</code>.</p>
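<p>For readers curious about what these three steps look like on the command line, here is a rough sketch; the exact package names and flags are illustrative and do not correspond to the actual Alt-Ergo build system:</p>
<pre><code class="language-shell-session">$ ocamlfind ocamlc -package js_of_ocaml -package js_of_ocaml.syntax \
    -syntax camlp4o -linkpkg <alt-ergo modules> main_js.ml -o alt_ergo.byte
$ js_of_ocaml alt_ergo.byte    # produces alt_ergo.js
</code></pre>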
<h3>General overview of the HTML interface</h3>
<p>The HTML interface is made of four panels:</p>
<ul>
<li>The left panel is an editable textarea in which you can write/paste/load the formula you want to prove.
</li>
<li>The bottom-left panel is used to display the answer of Alt-Ergo.
</li>
<li>The middle panel contains a set of buttons that let you interact with both the interface and the JavaScript version of Alt-Ergo.
</li>
<li>The right panel is used to display different views. The default view ("Options") allows you to control the options of Alt-Ergo. When a formula is proved valid, one may switch to the "Statistics" view, thanks to the corresponding button in the middle, to see which quantified axioms and predicates are used/instantiated during the proof. The "Debug" view shows the information received by <code>main_js.ml</code> from the HTML interface. The "Examples" view shows some basic examples in Alt-Ergo's native input language that can be loaded into the left panel with a simple click.
</li>
</ul>
<p><img src="/blog/assets/img/screenshot_try_alt_ergo_interface.png" alt="try-alt-ergo-interface" /></p>
<h3>A step-by-step example</h3>
<p>Let us see how "Try Alt-Ergo" works on a formula translated from Atelier-B in the context of the BWare project:</p>
<ul>
<li>First, open <a href="https://alt-ergo.ocamlpro.com/try.php">Try Alt-Ergo</a> in a new tab/window.
</li>
<li>Download the formula <a href="https://www.ocamlpro.com/files/try-alt-ergo.why">try-alt-ergo.why</a>. This formula contains 177 quantified axioms and 132 predicates.
</li>
<li>Click on the "Load a Local File" button of Try Alt-Ergo's interface and load the example into the left panel.
</li>
<li>Go to the "Options" panel and set the <code>maximum number of steps</code> to 1000, the <code>maximum number of triggers</code> to 1, and deactivate <code>E-matching</code>.
</li>
<li>Click on the "Ask Alt-Ergo" button and wait approximately 60 seconds (depending on your computer). On my laptop, Alt-Ergo gave the following answer after approximately 40 seconds.
</li>
</ul>
<pre><code class="language-shell-session"># Alt-Ergo's answer: Valid (37.2260 seconds) (222 steps)
</code></pre>
<ul>
<li>Now, you can navigate into the "Statistics" panel to see the quantified axioms and predicates that are instantiated during the proof, those that are potentially used, and those that have never been instantiated.
</li>
</ul>
<h3>Limitations</h3>
<ul>
<li>The JavaScript version is slower than the native and bytecode versions. In fact, the bytecode executable is 4 times faster and the native executable is 42 times faster than "Try Alt-Ergo", as shown below.
</li>
</ul>
<pre><code class="language-shell-session">./alt-ergo.byte -nb-triggers 1 -no-Ematching -max-split infinity -save-used-context try-alt-ergo.why
File "/home/mi/Bureau/po.why", line 3017, characters 1-2450:Valid (9.3105) (222)
./alt-ergo.opt -nb-triggers 1 -no-Ematching -max-split infinity -save-used-context try-alt-ergo.why
File "/home/mi/Bureau/po.why", line 3017, characters 1-2450:Valid (0.8750) (222)
</code></pre>
<ul>
<li>Since it is not possible to set a time limit in JavaScript, the "steps" mechanism should be used instead. This limit controls the number of calls to the decision procedure component of the solver.
</li>
<li>Currently, the integration of external plugins (such as our miniSAT-based SAT solver) is not supported
</li>
<li>Compared to AltGr-Ergo, statistics and debug information are only shown at the end of the execution.
</li>
<li>"Asking Alt-Ergo" may report "syntax error" on well-formed files for Safari and Midori users. The "Load a Local File" button is not working on the Opera browser.
</li>
</ul>
<p>[ Acknowledgement: this work is financially supported by the <a href="https://bware.lri.fr/">BWare project</a>. ]</p>
<h1>Comments</h1>
<p>Joshua Pratt (10 January 2020 at 5 h 20 min):</p>
<blockquote>
<p>Can the compiled alt-ergo.js be uploaded to npm? I’d love to use it in a web page I’m working on.</p>
</blockquote>
<p>OCamlPro (6 March 2020 at 16 h 03 min):</p>
<blockquote>
<p>Hi Joshua, thanks for passing by 🙂 We have no plans of building a bridge between a version of Alt-Ergo in JS and npm. However, you can tweak Alt-Ergo to suit your needs! We would recommend taking a look at Try Why3 http://why3.lri.fr/try/, where you will find a JavaScript version of Alt-Ergo. You can follow their instructions to build Alt-Ergo in JavaScript https://gitlab.inria.fr/why3/why3/tree/master/src/trywhy3</p>
</blockquote>
<p>Bharat Jayaraman (26 February 2020 at 17 h 37 min):</p>
<blockquote>
<p>This is VERY USEFUL tool!</p>
<p>I am using it in a course on software verification here in Buffalo. It’s great for checking verification conditions.</p>
<p>Many thanks,
Bharat</p>
</blockquote>
<p>OCamlPro (6 March 2020 at 16 h 04 min):</p>
<blockquote>
<p>Hi Bharat! Thank you for your message, we are always glad to hear from our users! If you feel so inclined, you can drop us an email at alt-ergo@ocamlpro.com to tell us more about your experience with Alt-Ergo and any feedback you may have.</p>
</blockquote>
OCamlPro Highlights: April 2014https://ocamlpro.com/blog/2014_05_20_ocamlpro_highlights_april_20142014-05-20T08:12:13Z2014-05-20T08:12:13Z
Çagdas Bozman
Here is a short report on some of our activities in April 2014, and a short analysis of OCaml evolution since its first release. OPAM Improvements We're still working on release 1.2. It was decided to include quite a few new features in this release, which delayed it a little bit since we want to be...<p>Here is a short report on some of our activities in April 2014, and a short analysis of OCaml evolution since its first release.</p>
<h3>OPAM Improvements</h3>
<p>We're still working on release 1.2. It was decided to include quite a few new features in this release, which delayed it a little bit since we want to be sure to get it right. It's now getting stabilized, documented and tested. One of the biggest improvements concerns the development workflow and the use of pinned packages, which is a powerful and complex feature that could also get a bit confusing. We are grateful for the large amount of feedback from the community that helped in its design. The basic idea is to use OPAM metadata from within the source packages, because it's most useful while developing and helps get the packaging right. It was possible before, but a little bit awkward: you now only need to provide an <code>opam</code> file or directory at the root of your project, and when pinned to either a local path or a version-controlled repository, opam will pick it up and use it. It will then be synchronized on any subsequent <code>opam update</code>. You can even do this if there is no corresponding package in the repository: OPAM will create it and store it in its internal repository for you. And in case this metadata is getting in the way, or you just want a quick local fix, you can always do <code>opam pin edit <package></code> to locally change the metadata used by opam.</p>
<p>During this month, we've also been improving performance by a large amount in several areas, because delays could become noticeable for people using it e.g. on Raspberry Pis. There is an important clarification on the <a href="https://github.com/ocaml/opam/blob/master/doc/design/metadata-evolution">handling of optional dependencies</a>; and we worked hard on making the build of OPAM as painless as possible in every possible setting.</p>
<h3>OPAM Weather Service</h3>
<p>Last month, we presented an <a href="https://cudf-solvers.irill.org/">online service</a> for OPAM, to provide advanced CUDF solvers to every OPAM user. The service is provided by <a href="https://www.irill.org/">IRILL</a>, and based on the tools they implemented to manipulate CUDF files (some of them are also used directly in OPAM).</p>
<p>This month, we are happy to introduce a new service, that we helped them put online: the <a href="https://ows.irill.org/">OPAM Weather Service</a>, an instantiation for OPAM of a <a href="https://qa.debian.org/dose/debcheck.html">service</a> they also provide for Debian. It shows the evolution of the installability of all packages in the official OPAM repository, for <a href="https://ows.irill.org/table.html">three stable versions</a> of OCaml (3.12.1, 4.00.1 and 4.01.0). It should help maintainers track dependency problems with their packages, when old packages are removed or new conflicting dependencies are introduced.</p>
<h3>An Internship on OCaml Namespaces</h3>
<p>This month, we welcomed Pierrick Couderc for an internship in our lab. He is going to work on adding namespaces to OCaml. His goal is to design a kind of namespaces that extend the current module mechanism in a consistent but powerful way. One challenge of his job will be to make these namespaces also extend our <a href="/blog/2011_08_10_packing_and_functors">big functors</a> to provide functors at the namespace level.</p>
<p>Pierrick is not a complete newcomer in our team: last year, he already worked for us with David Maison (now working at TrustInSoft) on an online service to <a href="https://edit.ocamlpro.com/">edit and compile</a> OCaml code for students.</p>
<h3>The Evolution of OCaml Sources</h3>
<p>This month, there was also a lot of activity for the Core team, as we are getting close to the feature freeze for OCaml 4.02. We took this opportunity to have a look at the evolution of OCaml sources since the first release of OCaml 1.00, in 1996.</p>
<p>Our first graph plots the size of uncompressed OCaml sources in bytes, from the first release to the current trunk:</p>
<p><img src="/blog/assets/img/graph_ocaml_bytes.png" alt="ocaml-bytes" /></p>
<p>The graph shows four interesting events:</p>
<ul>
<li>in 2002-2003, between 3.02 and 3.06, an increase of 4 MB
</li>
<li>in 2007, between 3.09.3 and 3.10.0, an increase of again 4 MB
</li>
<li>in 2013, between 4.00.1 and 4.01.0, an increase of 2 MB
</li>
<li>in 2014, between 4.01.0 and 4.02.0, a decrease of 6 MB
</li>
</ul>
<p>Our second graph plots the number of files per kind (OCaml sources, OCaml interfaces, C sources and C headers):</p>
<p><img src="/blog/assets/img/graph_ocaml_files.png" alt="ocaml-file" /></p>
<p>We can now check the files that were added and removed at the four events that we noticed on the first graph:</p>
<ul>
<li>the first event corresponds to the addition of 174 files for <code>camlp4</code> in 3.04, and then 70 files for <code>ocamldoc</code> in 3.06. Also, <code>labltk</code> increased a lot, with many new examples;
</li>
<li>the second event corresponds to the addition of 225 files for <code>ocamlbuild</code> in 3.10.0, and the replacement of <code>camlp4</code> (renamed into <code>camlp5</code>) by a new implementation;
</li>
<li>the third event corresponds to ... a change in the size of <code>boot/myocamlbuild.boot</code>, the bytecode file used by <code>ocamlbuild</code> to bootstrap itself!
</li>
<li>finally, the upcoming event corresponds to the removal of <code>camlp4</code> and <code>labltk</code> from 4.02, i.e. about 300 files for each of them.
</li>
</ul>
<p>Our third graph shows the number of lines per kind of file, again:</p>
<p><img src="/blog/assets/img/graph_ocaml_lines.png" alt="ocaml-lines" /></p>
<p>This graph does not show us much more than what we have seen by number of files, but what might be interesting is to compute the ratio, i.e. the number of lines per file, for each kind of file:</p>
<p><img src="/blog/assets/img/graph_ocaml_ratio.png" alt="ocaml-ratio" /></p>
<p>There is a general trend to increase the number of lines per file, from about 200 lines in an OCaml source file in 1996 to about 330 lines in 2014. This ratio increased considerably for release 3.04, because <code>camlp4</code> used to generate a huge bootstrap file of its own pre-preprocessed OCaml sources. More interestingly, the ratio didn't decrease in 2014, when <code>camlp4</code> was removed from the distribution! Interface files also grew bigger, but most of the increase was in 3.06, when <code>ocamldoc</code> was added to the distribution, and an effort was made to document <code>mli</code> files.</p>
The Generic Syntax Extensionhttps://ocamlpro.com/blog/2014_04_01_the_generic_syntax_extension2014-04-01T08:12:13Z2014-04-01T08:12:13Z
Çagdas Bozman
OCaml 4.01 with its new feature to disambiguate constructors allows to do a nice trick: a simple and generic syntax extension that allows to define your own syntax without having to write complicated parsetree transformers. We propose an implementation in the form of a ppx rewriter. it does only a s...<p>OCaml 4.01, with its new feature to disambiguate constructors, makes a nice trick possible: a simple and generic syntax extension that lets you define your own syntax without having to write complicated parsetree transformers. We propose an implementation in the form of a ppx rewriter.</p>
<p>It only performs a simple transformation: it replaces strings prefixed by an operator starting with <code>!</code> with a series of constructor applications.</p>
<p>For instance:</p>
<pre><code class="language-ocaml">!! "hello 3"
</code></pre>
<p>is rewritten to</p>
<pre><code class="language-ocaml">!! (Start (H (E (L (L (O (Space (N3 (End)))))))))
</code></pre>
<p>How is that generic? We will show you a few examples.</p>
<h4>Base 3 Numbers</h4>
<p>For instance, suppose you want to declare arbitrarily big base 3 numbers and define a syntax for them. We first start by declaring some types.</p>
<pre><code class="language-ocaml">type start = Start of p
and p =
| N0 of stop
| N1 of q
| N2 of q
and q =
| N0 of q
| N1 of q
| N2 of q
| Underscore of q
| End
and stop = End
</code></pre>
<p>This type only allows writing strings matching the regexp 0 | (1|2)(0|1|2|_)*. Notice that some constructors, like N0, appear in multiple types. This is not a problem since constructor disambiguation will choose the right one at the right place for us. Let's now define a few functions to use it:</p>
<pre><code class="language-ocaml">open Num
let rec convert_p = function
| N0 (End) -> Int 0
| N1 t -> convert_q (Int 1) t
| N2 t -> convert_q (Int 2) t
and convert_q acc = function
| N0 t -> convert_q (acc */ Int 3) t
| N1 t -> convert_q (Int 1 +/ acc */ Int 3) t
| N2 t -> convert_q (Int 2 +/ acc */ Int 3) t
| Underscore t -> convert_q acc t
| End -> acc
let convert (Start p) = convert_p p
</code></pre>
<pre><code class="language-ocaml"># val convert : start -> Num.num = <fun>
</code></pre>
<p>And we can now try it:</p>
<pre><code class="language-ocaml">let n1 = convert (Start (N0 End))
# val n1 : Num.num = <num 0>
let n2 = convert (Start (N1 (Underscore (N0 End))))
# val n2 : Num.num = <num 3>
let n3 = convert (Start (N1 (N2 (N0 End))))
# val n3 : Num.num = <num 15>
</code></pre>
<p>And the generic syntax extension allows us to write:</p>
<pre><code class="language-ocaml">let ( !! ) = convert
let n4 = !! "120_121_000"
val n4 : Num.num = <num 11367>
</code></pre>
<h4>Specialised Format Strings</h4>
<p>We can implement specialised format strings for a particular usage. Here, for concision, we will restrict ourselves to a very small subset of the classical format: the characters %, i, c and space.</p>
<p>Let's define the constructors.</p>
<pre><code class="language-ocaml">type 'a start = Start of 'a a
and 'a a =
| Percent : 'a f -> 'a a
| I : 'a a -> 'a a
| C : 'a a -> 'a a
| Space : 'a a -> 'a a
| End : unit a
and 'a f =
| I : 'a a -> (int -> 'a) f
| C : 'a a -> (char -> 'a) f
| Percent : 'a a -> 'a f
</code></pre>
<p>Let's look at the inferred type for some examples:</p>
<pre><code class="language-ocaml">let (!*) x = x
let v = !* "%i %c";;
# val v : (int -> char -> unit) start = Start (Percent (I (Space (Percent (C End)))))
let v = !* "ici";;
# val v : unit start = Start (I (C (I End)))
</code></pre>
<p>These are effectively the types we would like for such format strings. To use it we can define a simple printer:</p>
<pre><code class="language-ocaml">let rec print (Start cons) =
main cons
and main : type t. t a -> t = function
| I r ->
print_string "i";
main r
| C r ->
print_string "c";
main r
| Space r ->
print_string " ";
main r
| End -> ()
| Percent f ->
format f
and format : type t. t f -> t = function
| I r ->
fun i ->
print_int i;
main r
| C r ->
fun c ->
print_char c;
main r
| Percent r ->
print_string "%";
main r
let (!!) cons = print cons
</code></pre>
<p>And voila!</p>
<pre><code class="language-ocaml">let s = !! "%i %c" 1 'c';;
# 1 c
</code></pre>
<h3>How generic is it really?</h3>
<p>It may not look like it, but we can encode almost any syntax we might want this way. For instance, we can encode any regular language. To explain how we transform a regular language into a type definition, we will use the language a(a|)b as an example:</p>
<pre><code class="language-ocaml">type start = Start of a
and a =
| A of a'
and a' =
| A of b
| B of stop
and b = B of stop
and stop = End
</code></pre>
<p>We can try a few things on it:</p>
<pre><code class="language-ocaml">let v = Start (A (A (B End)))
# val v : start = Start (A (A (B End)))
let v = Start (A (B End))
# val v : start = Start (A (B End))
let v = Start (B End)
# Characters 15-16:
# let v = Start (B End);;
# ^
# Error: The variant type a has no constructor B
let v = Start (A (A (A (B End))))
# Characters 21-22:
# let v = Start (A (A (A (B End))));;
# ^
# Error: The variant type b has no constructor A
</code></pre>
<p>Assume the language is given as an automaton that:</p>
<ul>
<li>has 4 states, a, a', b and stop
</li>
<li>with initial state a
</li>
<li>with final state stop
</li>
<li>with transitions: a - A -> a', a' - A -> b, a' - B -> stop, b - B -> stop
</li>
</ul>
<p>Let's write {c} for the constructor corresponding to the character c, and [q] for the type corresponding to a state q of the automaton. Then:</p>
<ul>
<li>For each state q we have a type declaration [q]
</li>
<li>For each letter a of the alphabet we have a constructor {a}
</li>
<li>For each transition p - l -> q we have a constructor {l} with parameter [q] in type [p]:
</li>
</ul>
<pre><code class="language-ocaml">type [p] = {l} of [q]
</code></pre>
<ul>
<li>The End constructor without any parameter must be present in any final state
</li>
<li>The initial state e is declared by
</li>
</ul>
<pre><code class="language-ocaml">type start = Start of [e]
</code></pre>
<h3>Yet more generic</h3>
<p>In fact, we can also encode deterministic context-free languages (DCFL). To do that, we encode pushdown automata. Here we will only give a small example: the language of well-parenthesized words.</p>
<pre><code class="language-ocaml">type empty
type 'a r = Dummy
type _ q =
| End : empty q
| Rparen : 'a q -> 'a r q
| Lparen : 'a r q -> 'a q
type start = Start of empty q
let ( !! ) x = x
let m = !! ""
let m = !! "()"
let m = !! "((())())()"
</code></pre>
<p>To encode the stack, we use the type parameters: Lparen pushes an r to the stack, Rparen consumes it and End checks that the stack is effectively empty.</p>
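<p>To see the encoding at work, here is a small hand-written illustration (with the constructors applied explicitly rather than through the syntax extension):</p>
<pre><code class="language-ocaml">(* "(())": each Lparen pushes an r on the stack encoded in the type
   parameter, each Rparen pops one, and End requires an empty stack. *)
let ok : start = Start (Lparen (Lparen (Rparen (Rparen End))))

(* "(()" would be rejected: the outer Lparen expects an argument of type
   ['a r q], but [Lparen (Rparen End)] only has type [empty q]. *)
(* let ko : start = Start (Lparen (Lparen (Rparen End))) *)
</code></pre>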
<p>There are a few more tricks needed to encode tests on the top value in the stack, and a conversion of a grammar to Greibach normal form to allow this encoding.</p>
<h3>We can go even further</h3>
<h4>a^n b^n c^n</h4>
<p>In fact, we don't need to restrict ourselves to DCFL; we can for instance encode the a^n.b^n.c^n language, which is not context free:</p>
<pre><code class="language-ocaml">type zero
type 'a s = Succ
type (_,_) p =
| End : (zero,zero) p
| A : ('b s, 'c s) p -> ('b, 'c) p
| B : ('b, 'c s) q -> ('b s, 'c s) p
and (_,_) q =
| B : ('b, 'c) q -> ('b s, 'c) q
| C : 'c r -> (zero, 'c s) q
and _ r =
| End : zero r
| C : 'c r -> 'c s r
type start = Start of (zero,zero) p
let v = Start (A (B (C End)))
let v = Start (A (A (B (B (C (C End))))))
</code></pre>
<h4>Non recursive languages</h4>
<p>We can also encode solutions of Post Correspondence Problems (PCP), which are not recursive languages:</p>
<p>Suppose we have two alphabets A = { X, Y, Z } and O = { a, b } and two morphisms m1 and m2 from A to O* defined as</p>
<ul>
<li>m1(X) = a, m1(Y) = ab, m1(Z) = bba
</li>
<li>m2(X) = baa, m2(Y) = aa, m2(Z) = bb
</li>
</ul>
<p>Solutions of this instance of PCP are words such that their images by m1 and m2 are equal. For instance, ZYZX is a solution: both images are bbaabbbaa. The language of solutions can be represented by this type declaration:</p>
<pre><code class="language-ocaml">type empty
type 'a a = Dummy
type 'a b = Dummy
type (_,_) z =
| X : ('t1, 't2) s -> ('t1 a, 't2 b a a) z
| Y : ('t1, 't2) s -> ('t1 a b, 't2 a a) z
| Z : ('t1, 't2) s -> ('t1 b b a, 't2 b b) z
and (_,_) s =
| End : (empty,empty) s
| X : ('t1, 't2) s -> ('t1 a, 't2 b a a) s
| Y : ('t1, 't2) s -> ('t1 a b, 't2 a a) s
| Z : ('t1, 't2) s -> ('t1 b b a, 't2 b b) s
type start = Start : ('a, 'a) z -> start
let v = X (Z (Y (Z End)))
let r = Start (X (Z (Y (Z End))))
</code></pre>
<h3>Open question</h3>
<p>Can every context free language (not deterministic) be represented like that? Notice that the classical example of the palindrome can be represented (proof left to the reader).</p>
<h3>Conclusion</h3>
<p>So we have a nice extension available that allows you to define a new syntax by merely declaring a type. The code is available on <a href="https://github.com/chambart/generic_ppx">github</a>. We are waiting for the nice syntax you will invent!</p>
<p>PS: There may remain a small problem... If you inadvertently mistype something, you may find some quite complicated type errors attacking you like a piranha instead of a syntax error.</p>
OCamlPro Highlights: Feb 2014 https://ocamlpro.com/blog/2014_03_05_ocamlpro_highlights_feb_20142014-03-05T08:12:13Z2014-03-05T08:12:13Z
Çagdas Bozman
Here is a short report of some of our activities in February 2014 ! Displaying what OPAM is doing After releasing version 1.1.1, we have been very busy preparing the next big things for OPAM. We have also steadily been improving stability and usability, with a focus on friendly messages: for example...<p>Here is a short report of some of our activities in February 2014 !</p>
<h4>Displaying what OPAM is doing</h4>
<p>After releasing <a href="https://github.com/ocaml/opam/releases/tag/1.1.1">version 1.1.1</a>, we have been very busy preparing the <a href="https://github.com/ocaml/opam/wiki/Roadmap">next big things</a>
for OPAM. We have also steadily been improving stability and usability,
with a focus on friendly messages: for example, there is a <a href="https://github.com/ocaml/opam/commit/4d1a79a0a92456872e4986de6d7cfc07a7ce4c7c">whole new algorithm</a> to give the best explanations on what OPAM is going to do and why:</p>
<p>With OPAM 1.1.1, you currently get this information:</p>
<pre><code class="language-shell-session">## opam install custom_printf.109.15.00
The following actions will be performed:
- remove pa_bench.109.55.02
- downgrade type_conv.109.60.01 to 109.20.00 [required by comparelib, custom_printf]
- downgrade uri.1.4.0 to 1.3.11
- recompile variantslib.109.15.03 [use type_conv]
- downgrade sexplib.110.01.00 to 109.20.00 [required by custom_printf]
- downgrade pa_ounit.109.53.02 to 109.18.00 [required by custom_printf]
- recompile ocaml-data-notation.0.0.11 [use type_conv]
- recompile fieldslib.109.20.03 [use type_conv]
- recompile dyntype.0.9.0 [use type_conv]
- recompile deriving-ocsigen.0.5 [use type_conv]
- downgrade comparelib.109.60.00 to 109.15.00
- downgrade custom_printf.109.60.00 to 109.15.00
- downgrade cohttp.0.9.16 to 0.9.15
- recompile cow.0.9.1 [use type_conv, uri]
- recompile github.0.7.0 [use type_conv, uri]
0 to install | 7 to reinstall | 0 to upgrade | 7 to downgrade | 1 to remove
</code></pre>
<p>With the next <code>trunk</code> version of OPAM, you will get the much more informative output on real dependencies:</p>
<pre><code class="language-shell-session">## opam install custom_printf.109.15.00
The following actions will be performed:
- remove pa_bench.109.55.02 [conflicts with type_conv, pa_ounit]
- downgrade type_conv from 109.60.01 to 109.20.00 [required by custom_printf]
- downgrade uri from 1.4.0 to 1.3.10 [uses sexplib]
- recompile variantslib.109.15.03 [uses type_conv]
- downgrade sexplib from 110.01.00 to 109.20.00 [required by custom_printf]
- downgrade pa_ounit from 109.53.02 to 109.18.00 [required by custom_printf]
- recompile ocaml-data-notation.0.0.11 [uses type_conv]
- recompile fieldslib.109.20.03 [uses type_conv]
- recompile dyntype.0.9.0 [uses type_conv]
- recompile deriving-ocsigen.0.5 [uses type_conv]
- downgrade comparelib from 109.60.00 to 109.15.00 [uses type_conv]
- downgrade custom_printf from 109.60.00 to 109.15.00
- downgrade cohttp from 0.9.16 to 0.9.14 [uses sexplib]
- recompile cow.0.9.1 [uses type_conv]
- recompile github.0.7.0 [uses uri, cohttp]
0 to install | 7 to reinstall | 0 to upgrade | 7 to downgrade | 1 to remove
</code></pre>
<p>Failsafe behaviour is being much improved as well, because things do
happen to go wrong when you access the network to download packages and
then compile them, and that was the biggest source of problems for our
users: errors are now more <a href="https://github.com/ocaml/opam/commit/f8808c603820771627a6a8477778a5f52e46758f">tightly controlled</a> in <a href="https://github.com/ocaml/opam/commit/c52a2f2ef12ad93f2838907ab3e5ac38d631703b">each stage</a> of the opam command.</p>
<p>For example, nothing will be changed in case of a failed or interrupted download, and if you press <code>C-c</code> in the middle of an action, you’ll get something like this:</p>
<pre><code class="language-shell-session">[ERROR] User interruption while waiting for sub-processes
[ERROR] Failure while processing typerex.1.99.6-beta
=-=-= Error report =-=-=
These actions have been completed successfully
install conf-gtksourceview.2
upgrade cmdliner from 0.9.2 to 0.9.4
The following failed
install typerex.1.99.6-beta
Due to the errors, the following have been cancelled
install ocaml-top.1.1.2
install ocp-index.1.0.2
install ocp-build.1.99.6-beta
recompile alcotest.0.2.0
install ocp-indent.1.4.1
install lablgtk.2.16.0
The former state can be restored with opam switch import -f "<xxx>.export"
</code></pre>
<p>You also shouldn’t have to dig anymore to find the most meaningful error when something fails.</p>
<p>With the ever-increasing number of packages and versions, resolving
requests becomes a real challenge and we’re glad we made the choice to
rely on specialized solvers. The built-in heuristics may show their limits
when attempting <a href="https://github.com/ocaml/opam-rt/commit/f15c492b1a21ccd99e140a3d440330dd0d39a8ff">long-delayed upgrades</a>, and everybody is encouraged to install an external solver (<a href="http://potassco.sourceforge.net/index.html">aspcud</a> being the one supported at the moment).</p>
<p>Consequently, we have also been working more tightly with the Mancoosi team at <a href="http://www.irill.org/">IRILL</a> to <a href="https://github.com/ocaml/opam/commit/d3dd9b0ef46881987251f3e375e86dd209b034b8">improve interaction with the solver</a>, and how the user can <a href="https://github.com/ocaml/opam/wiki/Specifying_Solver_Preferences">get the best of it</a> is now well documented, thanks to Roberto Di Cosmo.</p>
<h4>Per-projects OPAM Switches with <code>ocp-manager</code></h4>
<p>At OCamlPro, we often use OPAM with multiple switches, to test
whether our tools are working with different versions of OCaml,
including the new ones that we are developing. Switching between
versions is not always as intuitive as we would like, as we sometimes
forget to call</p>
<pre><code class="language-shell-session">$ eval `opam config env`
</code></pre>
<p>in the right location or at the right time, and end up compiling a
project with a different version of OCaml than we would have liked.</p>
<p>It was quite surprising to discover that a tool that we had developed a long time ago, <a href="http://www.typerex.org/ocp-manager.html">ocp-manager</a>, would actually become a solution for us to a problem that appeared just now with OPAM: <code>ocp-manager</code>
was a tool we used to switch between different versions of OCaml before
OPAM. It would use a directory of wrappers, one for each OCaml tool,
and by adding this directory once and for all to the PATH, with:</p>
<pre><code class="language-shell-session">$ eval `ocp-manager -config`
</code></pre>
<p>You would be able to switch to OPAM switch 3.12.1 (that needs to have been installed first with OPAM) immediately by using:</p>
<pre><code class="language-shell-session">$ ocp-manager -set opam:3.12.1
</code></pre>
<p>Nothing much different from OPAM? The nice thing with <code>ocp-manager</code>
is that wrappers also use environment variables and per-directory
information to choose the OCaml version of the tool they are going to
run. For example, if some top-directory of your project contains a file <code>.ocp-switch</code>
with the line “opam:4.01.0”, your project will always be compiled with
this version of OCaml, even if you change the global per-user
configuration. You can also override the global and local configuration
by setting the <code>OCAML_VERSION</code> environment variable.</p>
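<p>Concretely, the per-project setup boils down to something like this (the project path is made up, and the value format for <code>OCAML_VERSION</code> is assumed to match the argument of <code>-set</code>):</p>
<pre><code class="language-shell-session">$ ocp-manager -set opam:4.01.0                   # global default
$ echo "opam:4.01.0" > my-project/.ocp-switch    # per-project version
$ OCAML_VERSION=opam:3.12.1 ocamlc -version      # per-command override
</code></pre>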
<p>Maybe <a href="https://www.typerex.org/ocp-manager.html">ocp-manager</a> can also be useful for you. Just install it with <code>opam install ocp-manager</code>,
change your shell configuration to add its directory to your PATH, and
check if it also works for you (the manpage can be very useful!).</p>
<h4>Optimization Patches for <code>ocamlopt</code> under Reviewing Process</h4>
<p>This month, we also spent a lot of time improving the optimization patches that <a href="https://github.com/chambart/ocaml">we submitted</a>
for inclusion into OCaml, and that we have described in our previous
blog posts. Mark Shinwell from Jane Street and Gabriel Scherer from
INRIA kindly accepted to devote some of their time to a thorough
reviewing process, leading to many improvements in the readability and
maintainability of our optimization code. As this first patch is a
prerequisite for our next patches, we also spent a lot of time
propagating these modifications, so that we will be able to submit them
faster once this one has been merged!</p>
<h4>Displaying the Distribution of Block Sizes with <code>ocp-memprof</code></h4>
<p>In our study to understand the memory behavior of OCaml applications,
we have investigated the distribution of block sizes, both in the heap
(live blocks) and in the free list (dead blocks). This information
should help the programmer to understand which GC parameters might be
the best ones for his application, by showing the fragmentation of the
heap and the time spent searching in the free list. It is all the more
important as improving the format of the free list with bins has been
discussed lately in the Core team.</p>
<p>Here, we display the distribution of blocks at a snapshot during the execution of <code>why3replayer</code>, a tool that we are trying to optimize during the <a href="https://bware.lri.fr/index.php/BWare_project">Bware Project</a>. The number of free blocks is displayed darker than live blocks, from size 21 to size 0.</p>
<p><img src="/blog/assets/img/graph_blocks_stats1.png" alt="blocks_stats" /></p>
<p>It is interesting to notice that, for this application, almost all
allocations have a size smaller than 6. We are planning to use such
information to simulate the cost of allocation for this application, and
see which data structure for the free list would most benefit
the performance of the application.</p>
<h4>Whole Program Analysis</h4>
<p>The static OCaml analyzer is going quite well. Our set of (working) <a href="https://github.com/OCamlPro/ocaml-data-analysis/tree/master/test/samples">test samples</a> is growing in size and complexity. Our last improvement was what is called <strong>widening</strong>.
What’s widening? Well, the main idea is “when I go through a big loop
5000 times, I don’t want the analyzer to do that too”. If we take this
sample test:</p>
<pre><code class="language-ocaml">let () = for i = 0 to 5000 do () done
</code></pre>
<p>Without widening, the analysis would loop 5000 times through that
loop. That’s quite useless, not to mention that replacing 5000 by <code>Random.int ()</code> would make the analysis loop until max_int (2^62 times on a 64-bit computer)! Worse, let’s take this code:
<pre><code class="language-ocaml">let () =
let x = ref 0 in
for i = 1 to 10 do
x := !x + 1
done
</code></pre>
<p>Here, the analysis would not see that the increment on !x and i would
be linked (that’s one of the approximations we do to make the
computation doable). So, the analyzer does not loop ten times, but again
2^62 times: we do not want that to happen.</p>
<p>The good news now: we can say to the analyzer “every time you go
through a loop, check what integers you incremented, and suppose you’ll
increment them again and again until you can’t”. This way we only go
twice through our for-loop: first to discover it, then to propagate the
widening approximation.</p>
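<p>For readers unfamiliar with the technique, here is a minimal, self-contained sketch of interval widening in OCaml; it is only meant to illustrate the idea and is not the analyzer's actual code:</p>
<pre><code class="language-ocaml">(* Abstract values are integer intervals; Bot is the empty interval. *)
type itv = Bot | Itv of int * int

let join a b = match a, b with
  | Bot, x | x, Bot -> x
  | Itv (l1, u1), Itv (l2, u2) -> Itv (min l1 l2, max u1 u2)

(* Widening: any bound that is still growing after a loop iteration is
   pushed directly to "infinity" (here min_int/max_int). *)
let widen previous current = match previous, current with
  | Bot, x | x, Bot -> x
  | Itv (l1, u1), Itv (l2, u2) ->
    Itv ((if l2 < l1 then min_int else l1),
         (if u2 > u1 then max_int else u1))

let () =
  (* One iteration of [x := !x + 1] starting from [0;0] yields [0;1];
     widening then jumps straight to [0; max_int] instead of iterating. *)
  match widen (Itv (0, 0)) (join (Itv (0, 0)) (Itv (1, 1))) with
  | Itv (l, u) -> Printf.printf "[%d; %d]\n" l u
  | Bot -> print_endline "bottom"
</code></pre>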
<p>Of course this is not that simple, and we’ll often lose information
by doing only two iterations. But in most cases, we don’t need it or we
can get it in a quicker way than iterating billions of times through a
small loop.</p>
<p>Hopefully, we’ll soon be able to analyze any simple program that uses only <code>Pervasives</code> and the basic language features, but <code>for</code> and <code>while</code> loops are already a good starting point!</p>
<h4>SPARK 2014: a Use-Case of Alt-Ergo</h4>
<p>The SPARK toolset, developed by the <a href="https://www.adacore.com/">AdaCore</a>
company, targets the verification of programs written in the SPARK
language, a subset of Ada used in the design of critical systems. We
published this month a <a href="https://alt-ergo.ocamlpro.com/use_cases.php">use-case</a> of Alt-Ergo that explains the integration of our solver as a back-end of the <a href="https://www.spark-2014.org/">next generation of SPARK</a>.</p>
<p>Discussions with SPARK 2014 developers were very important for us to
understand the strengths of Alt-Ergo for them and what would be
improved in the solver. We hope this use-case will be helpful for IT
solutions providers that would need an automatic solver in their
products.</p>
<h4>Scilab 5 or Scilab 6 ?</h4>
<p>We are still working at improving the Scilab environment with new
tools written in OCaml. We are soon going to release a new version of <a href="https://scilint.ocamlpro.com/">Scilint</a>,
our style-checking tool for Scilab code, with a new parser compatible
with Scilab 5 syntax. Changing the parser of Scilint was not an easy
job: while our initial parser was partially based on the yacc parser of
the future Scilab 6, we had to write the new parser from scratch to
accept the more tolerant syntax of Scilab 5. It was also a good
opportunity to design a cleaner AST than the one copied from Scilab 6:
written in C++, the Scilab 6 AST would for example have all AST nodes
inherit from the <code>Exp</code> class, even instructions or the list of parameters of a function prototype!</p>
<p>We have also started to work on a type-system for Scilab. We want the
result to be a type language expressive enough to express, say, the
(dependent) sizes of matrices, yet simple enough for clash messages not
to be complete black magic for Scilab programmers. This is not simple.
In particular, there is the other constraint to build a versatile type
system that could serve a JIT or give usable information to the
programmer. Which means that the type environment is a mix of static
information coming from the inference and of annotations, and dynamic
information gotten by introspection of the dynamic interpreter.</p>
<p>In the mean time, we are also planning to write a simpler JIT, to
mitigate the impatience of Scilab programmers expecting to feel the
underlying power of OCaml!</p>
OCamlPro Highlights: Dec 2013 & Jan 2014 https://ocamlpro.com/blog/2014_02_05_ocamlpro_highlights_dec_2013_jan_20142014-02-05T08:12:13Z2014-02-05T08:12:13Z
Çagdas Bozman
Here is a short report of some of our activities in last December and January ! A New Intel Backend for ocamlopt With the support of LexiFi, we started working on a new Intel backend for the ocamlopt native code compiler. Currently, there are four Intel backends in ocamlopt: amd64/emit.mlp, amd64/em...<p>Here is a short report of some of our activities in last December and January !</p>
<h3>A New Intel Backend for ocamlopt</h3>
<p>With the support of LexiFi, we started working on a new Intel backend for the <code>ocamlopt</code> native code compiler. Currently, there are four Intel backends in <code>ocamlopt</code>: <code>amd64/emit.mlp</code>, <code>amd64/emit_nt.mlp</code>, <code>i386/emit.mlp</code> and <code>i386/emit_nt.mlp</code>, i.e. support for two processors (amd64 and i386) and two OS variants (Unices and Windows). These backends directly output assembly source files, on which the platform assembler is called (<code>gas</code> on Unices, and <code>masm</code> on Windows).</p>
<p>The current organisation makes it hard to maintain these backends: code for a given processor has to be written in two almost identical files (Unix and Windows), with subtle differences in the syntax: for example, the destination operand is the second parameter in <code>gas</code> (AT&T) syntax, while it is the first one in <code>masm</code> (Intel) syntax.</p>
<p>Our current work aims at merging, for each processor, the Unix and Windows backends, by making them generate an abstract representation of the assembly. This representation is shared between the two processors ('amd64' and 'i386'), so that we only have to develop two printers, one for <code>gas</code> syntax and one for <code>masm</code> syntax. As a consequence, maintenance of the backend will be much easier: while writing the assembly code, the developer does not need to care about the exact syntax. Moreover, the type-checker can verify that each assembler instruction is used with the correct number of well-formatted operands.</p>
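<p>To give an idea of the approach, here is a minimal sketch (not the actual compiler code) of an abstract instruction type with one printer per assembler syntax:</p>
<pre><code class="language-ocaml">(* A toy abstract assembly: one instruction, two registers. *)
type reg = RAX | RBX
type instr = Mov of reg * reg          (* Mov (dst, src) *)

let reg_gas = function RAX -> "%rax" | RBX -> "%rbx"
let reg_masm = function RAX -> "rax" | RBX -> "rbx"

(* gas (AT&T) syntax: source first, destination last. *)
let print_gas = function
  | Mov (dst, src) -> Printf.sprintf "movq %s, %s" (reg_gas src) (reg_gas dst)

(* masm (Intel) syntax: destination first. *)
let print_masm = function
  | Mov (dst, src) -> Printf.sprintf "mov %s, %s" (reg_masm dst) (reg_masm src)
</code></pre>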
<p>Finally, our hope is that it will also be possible to write optimization passes directly on the assembly representation, such as peephole optimizations or instruction re-scheduling. This work is available in OCaml SVN, in the <a href="https://caml.inria.fr/cgi-bin/viewvc.cgi/ocaml/branches/abstract_x86_asm/asmcomp/intel_proc.ml?view=markup">"abstract_x86_asm" branch</a>.</p>
<h3>OPAM, new Release 1.1.1</h3>
<p>OPAM has been shifted from the 1.1.0-RC to 1.1.1, with large stability and UI improvements. We put a lot of effort on improving the interface, and on helping to build other tools in the emerging ecosystem around OPAM. Louis visited OCamlLabs, which was a great opportunity to discuss the future of OPAM and the platform, and contribute to their effort towards <a href="https://github.com/ocaml/opam/issues/1035">opam-in-a-box</a>, a new way to generate pre-configured VirtualBox instances with all OCaml packages locally installable by OPAM, particularly convenient for computer classrooms.</p>
<p>The many plans and objectives on OPAM can be seen and discussed on the work-in-progress <a href="https://github.com/ocaml/opam/wiki/Roadmap">OPAM roadmap</a>. Lots of work is ongoing for the next releases, including Windows support, binary packages, and allowing more flexibility by shifting the compiler descriptions to the packages.</p>
<h3><code>ocp-index</code> and its new Brother, <code>ocp-grep</code></h3>
<p>In our continued efforts to improve the environment and tools for OCaml hackers, we also made some extensions to <code>ocp-index</code>, which, in addition to completing and documenting the values from your libraries and using binary annotations to jump to value definitions, now comes with a tiny <code>ocp-grep</code> tool that offers the possibility to syntactically locate instances of a given identifier around your project - handling <code>open</code>, local opens, module aliases, etc. In emacs, <code>C-c /</code> will get the fully qualified version of the ident under the cursor and find all its uses throughout your project. Simple, efficient and very handy for refactorings. The <code>ocp-index</code> query interface has also been made more expressive. Some documentation is <a href="https://www.typerex.org/ocp-index.html">online</a> and will be available shortly in the upcoming release 1.1.</p>
<h3><code>ocp-cmicomp</code>: Compression of Interface Files for Try-OCaml</h3>
<p>While developing Try-OCaml, we noticed a problem with big compiled interface files (.cmi). In Try-OCaml, such files are embedded inside the JavaScript file by <code>js_of_ocaml</code>, resulting in huge code files to download on connection (about 12 MB when linking <code>Dom_html</code> from <code>js_of_ocaml</code>, and about 40 MB when linking <code>Core_kernel</code>), and the browser freezing for a few seconds when opening the corresponding modules.</p>
<p>To reduce this problem, we developed a tool, called <code>ocp-cmicomp</code>, to compress compiled interface files. A compiled interface file is just a huge OCaml data structure, marshalled using <code>output_value</code>. This data structure is often created by copying values from other interface files (types, names of values, etc.) during the compilation process. As this is done transitively, the data structure has a lot of redundancy, but has lost most of the sharing. <code>ocp-cmicomp</code> tries to recover this sharing: it iterates on the data structure, hash-consing the immutable parts of it, to create a new data structure with almost optimal sharing.</p>
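<p>The principle of hash-consing can be illustrated on a toy type (this is only a sketch, not <code>ocp-cmicomp</code>'s actual code):</p>
<pre><code class="language-ocaml">(* Structurally equal immutable subtrees are replaced by a single
   physical representative, found through a hashtable. *)
type tree = Leaf of string | Node of tree * tree

let table : (tree, tree) Hashtbl.t = Hashtbl.create 1024

let rec share t =
  let t' = match t with
    | Leaf _ -> t
    | Node (l, r) -> Node (share l, share r)
  in
  try Hashtbl.find table t'                      (* reuse the existing copy *)
  with Not_found -> (Hashtbl.add table t' t'; t')

let () =
  let t1 = share (Node (Leaf "x", Leaf "x")) in
  let t2 = share (Node (Leaf "x", Leaf "x")) in
  assert (t1 == t2)  (* both trees are now physically the same value *)
</code></pre>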
<p>To increase the opportunities for sharing, <code>ocp-cmicomp</code> also uses some heuristics: for example, it computes the most frequent methods in objects, and sorts the list of methods of each object type in increasing order of frequency. As a consequence, different object types are more likely to share the same tail. Finally, we also had to be very careful: the type-checker internally uses a lot of physical comparisons between types (especially with polymorphic variables and row variables), so we still had to prevent sharing of some immutable parts to avoid typing problems.</p>
<p>The result is quite interesting. For example, <code>dom_html.cmi</code> was reduced from 2.3 MB to 0.7 MB (-71%, with a lot of object types), and the corresponding JavaScript file for Try-OCaml decreased from 12 MB to 5 MB. <code>core_kernel.cmi</code> was reduced from 13.5 MB to 10 MB (-26%, no object types), while the corresponding JavaScript decreased from 40 MB to 30 MB !</p>
<h3>OCamlRes: Bundling Auxiliary Files at Compile Time</h3>
<p>A common problem when writing portable software is to locate the resources of the program, and its predefined configuration files. The program has to know the system on which it is running, which can be done as in the old times by patching the source, by generating a set of globals, or at run-time. Either way, paths may then vary depending on the system. For instance, paths are often completely static on Unix while they are partially forged on bundled MacOS apps or on Windows. Then, there is always the task of bundling the binary with its auxiliary files, which depends on the OS.</p>
<p>For big apps with lots of system interaction, it is something you have to undertake. However, for small apps, it is an unjustified burden. The alternative proposed by <a href="http://www.typerex.org/ocp-ocamlres.html">OCamlRes</a> is to bundle these auxiliary files at compile time as an OCaml module source. Then, one can just compile the same (partially pre-generated) code for all platforms and distribute all-inclusive, naked binary files. This also has the side advantage of turning run-time exceptions for inexistent or invalid files to compile-time errors. OCamlRes is made of two parts:</p>
<ul>
<li>an <code>ocplib-ocamlres</code> library to manipulate resources at run-time, to scan input files to build resource trees, and to dump resources in various formats
</li>
<li>a command line tool <code>ocp-ocamlres</code>, that reads the resources and bundles them into OCaml source files.
</li>
</ul>
<p>OCamlRes has several output formats, some more subtle than the default mechanism (which is to transform a directory structure on the disk into an OCaml static tree where each file is replaced by its content), and can (and will) be extended. An example is detailed in the <a href="https://www.typerex.org/ocp-ocamlres.html">documentation</a> file.</p>
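<p>To make the idea more concrete, the default output could look roughly like the following purely hypothetical sketch, where a small <code>res/</code> directory has been turned into a static OCaml module (names and layout are illustrative only, not the tool's actual generated code):</p>
<pre><code class="language-ocaml">(* Hypothetical result of bundling res/readme.txt and res/conf/default.ini. *)
module Res = struct
  let readme_txt = "Hello from an embedded file!\n"
  module Conf = struct
    let default_ini = "verbosity = 1\ncolor = true\n"
  end
end

(* The program reads its "files" without ever touching the file system. *)
let () = print_string Res.readme_txt
</code></pre>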
<h3>Compiler optimisations</h3>
<p>The last post mentioned improvements on the prototype compiler optimization allowing recursive function specialization. Some quite complicated corner cases needed a rethink of some parts of the architecture. The first version of the patch was meant to be as simple as possible. To this end we chose to avoid as much as possible the need to maintain non-trivially checkable invariants on the intermediate language. That decision led us to add some constraints on what we allowed ourselves to do. One such constraint that matters here is that we wanted every crucial piece of information (that breaks things if lost) to be propagated following the scope. For instance, that means that in a case like:</p>
<pre><code class="language-ocaml">let x = let y = 1 in (y,y) in x
</code></pre>
<p>the information that <code>y</code> is an integer can escape its scope but if the information is lost, at worst the generated code is not as good as possible, but is still correct. But sometimes, some information about functions really matters:</p>
<pre><code class="language-ocaml">let f x =
let g y = x + y in
g
let h a =
let g = f a in
g a
</code></pre>
<p>Let's suppose in this example that <code>f</code> cannot be inlined, but <code>g</code> can. Then, <code>h</code> becomes (with <code>g.x</code> being access to <code>x</code> in the closure of <code>g</code>):</p>
<pre><code class="language-ocaml">let h a =
let g = f a in
a + g.x
</code></pre>
<p>Let's suppose that some other transformation elsewhere allowed <code>f</code> to be inlined now, then <code>h</code> becomes:</p>
<pre><code class="language-ocaml">let h a =
let x = a in
let g y = x + y in (* and the code can be eliminated from the closure *)
a + g.x
</code></pre>
<p>Here the closure of <code>g</code> changes: the code is eliminated so only the <code>x</code> field is kept in the block, hence changing its offset. This information about the closure (what is effectively available in the closure) must be propagated to the use point (<code>g.x</code>) to be able to generate the offset access in the block. If this information is lost, there is no way to compile that part. The way to avoid that problem was to limit a bit the kind of cases where inlining is possible, so that this kind of information could always be propagated following the scope. But in fact a few cases did not verify that property when dealing with inlining parameters from different compilation units.</p>
<p>So we undertook to rewrite some parts to be able to ensure that those kinds of information are effectively correctly propagated, and to add assertions everywhere to avoid forgetting a case. The main problem was to track all the strange corner cases that would almost never happen, or wouldn't matter if they were not optimally compiled, but must not lose any information to satisfy the assertions.</p>
<h3>Alt-Ergo: More Confidence and More Features</h3>
<h4>Formalizing and Proving a Critical Part of the Core</h4>
<p>Last month, we considered the formalization and the proof of a critical component of Alt-Ergo's core. This component handles equality solving to produce equivalent substitutions. However, since Alt-Ergo handles several theories (linear integer and rational arithmetic, enumerated datatypes, records, ...), providing a global routine that combines solvers of these individual theories is needed to be able to solve mixed equalities.</p>
<p>The example below shows one of the difficulties we faced when designing our combination algorithm: the solution of the equality <code>r = {a = r.a + r.b; b = 1; c = C}</code> cannot just be of the form <code>r |-> {a = r.a + r.b; b = 1; c = C}</code> as the pivot r appears in the right-hand side of the solution. To avoid this kind of subtle occur-checks, we have to solve an auxiliary and simpler conjunction of three equalities in our combination framework: <code>r = {a = k1 + k2; b = 1; c = C}</code>, <code>r.a = k1</code> and <code>r.b = k2</code> (where <code>k1</code> and <code>k2</code> are fresh variables). We will then deduce that <code>k2 |-> 1</code> and that <code>k1 + k2 = k1</code>, which has no solution.</p>
<pre><code class="language-ocaml">type enum = A | B | C
type t = { a : int ; b : int ; c : enum }
logic r : t
goal g: r = {a = r.a + r.b; b = 1; c = C} -> false
</code></pre>
<p>After having implemented a new combination algorithm in Alt-Ergo a few months ago, we considered its formalization and its proof, as we have done with most of the critical parts of Alt-Ergo. It was really surprising to see how the type information associated with Alt-Ergo terms helped us to prove the termination of the combination algorithm, a crucial property that was hard to prove in our previous combination algorithms, and a challenging research problem as well.</p>
<h4>Models Generation</h4>
<p>On the development side, we conducted some preliminary experiments in order to extend Alt-Ergo with a model generation feature. This capability is useful to derive concrete test-cases that may either exhibit erroneous behaviors of the program being verified or bugs in its formal specification.</p>
<p>In a first step, we concentrated on model generation for the combination of the theories of uninterpreted functions and linear integer arithmetic. The following example illustrates this problem:</p>
<pre><code class="language-ocmal">logic f : int -> int
logic x, y : int
goal g: 2*x >= 0 -> y >= 0 -> f(x) <> f(y) -> false
</code></pre>
<p>We have a satisfiable (but non-valid) formula, where <code>x</code> and <code>y</code> are in the integer intervals <code>[0,+infinity[</code> and <code>f(x) <> f(y)</code>. We would like to find concrete values for <code>x</code>, <code>y</code> and <code>f</code> that satisfy the formula. A first attempt to answer this question may be the following:</p>
<ul>
<li>From an arithmetic point of view, <code>x = 0</code> and <code>y = 0</code> are possible values for <code>x</code> and <code>y</code>. So, linear arithmetic suggests this partial model to the other theories.
</li>
<li>The theory of uninterpreted functions cannot agree with this solution. In fact, <code>x = y = 0</code> would imply <code>f(x) = f(y)</code>, which contradicts <code>f(x) <> f(y)</code>. More generally, <code>x</code> should be different from <code>y</code>.
</li>
<li>Now, if linear arithmetic suggests <code>x = 0</code> and <code>y = 1</code>, the theory of uninterpreted functions will agree. The next step is to find integer values for <code>f(0)</code> and <code>f(1)</code> such that <code>f(0) <> f(1)</code>.
</li>
</ul>
<p>After having implemented a brute force technique that tries to construct such models, our main concern now is to find an elegant and more efficient "divide and conquer" strategy that allows each theory to compute its own partial model with guarantees that this model will be coherent with the partial models of the other theories. It would be then immediate to automatically merge these partial solutions into a global one.</p>
OPAM 1.1.1 releasedhttps://ocamlpro.com/blog/2014_01_29_opam_1.1.1_released2014-01-29T08:12:13Z2014-01-29T08:12:13Z
Louis Gesbert
We are proud to announce that OPAM 1.1.1 has just been released. This minor release features mostly stability and UI/doc improvements over OPAM 1.1.0, but also focuses on improving the API and tools to be a better base for the platform (functions for opam-doc, interface with tools like opamfu and op...<p>We are proud to announce that <em>OPAM 1.1.1</em> has just been released.</p>
<p>This minor release features mostly stability and UI/doc improvements over
OPAM 1.1.0, but also focuses on improving the API and tools to be a better
base for the platform (functions for <code>opam-doc</code>, interface with tools like
<code>opamfu</code> and <code>opam-installer</code>). Lots of bigger changes are in the works, and
will be merged progressively after this release.</p>
<h2>Installing</h2>
<p>Installation instructions are available
<a href="http://opam.ocaml.org/doc/Quick_Install.html">on the wiki</a>.</p>
<p>Note that some packages may take a few days until they get out of the
pipeline. If you're eager to get 1.1.1, either use our
<a href="https://raw.github.com/ocaml/opam/master/shell/opam_installer.sh">binary installer</a> or
<a href="https://github.com/ocaml/opam/releases/tag/1.1.1">compile from source</a>.</p>
<p>The 'official' package repository is now hosted at <a href="https://opam.ocaml.org">opam.ocaml.org</a>,
synchronised with the Git repository at
<a href="http://github.com/ocaml/opam-repository">http://github.com/ocaml/opam-repository</a>,
where you can contribute new packages descriptions. Those are under a CC0
license, a.k.a. public domain, to ensure they will always belong to the
community.</p>
<p>Thanks to all of you who have helped build this repository and made OPAM
such a success.</p>
<h2>Changes</h2>
<p>From the changelog:</p>
<ul>
<li>Fix <code>opam-admin make <packages> -r</code> (#990)
</li>
<li>Explicitly prettyprint list of lists, to fix <code>opam-admin depexts</code> (#997)
</li>
<li>Tell the user which field is invalid in a configuration file (#1016)
</li>
<li>Add <code>OpamSolver.empty_universe</code> for flexible universe instantiation (#1033)
</li>
<li>Add <code>OpamFormula.eval_relop</code> and <code>OpamFormula.check_relop</code> (#1042)
</li>
<li>Change <code>OpamCompiler.compare</code> to match <code>Pervasives.compare</code> (#1042)
</li>
<li>Add <code>OpamCompiler.eval_relop</code> (#1042)
</li>
<li>Add <code>OpamPackage.Name.compare</code> (#1046)
</li>
<li>Add types <code>version_constraint</code> and <code>version_formula</code> to <code>OpamFormula</code> (#1046)
</li>
<li>Clearer command aliases. Made <code>info</code> an alias for <code>show</code> and added the alias
<code>uninstall</code> (#944)
</li>
<li>Fixed <code>opam init --root=<relative path></code> (#1047)
</li>
<li>Display OS constraints in <code>opam info</code> (#1052)
</li>
<li>Add a new 'opam-installer' script to make <code>.install</code> files usable outside of opam (#1026)
</li>
<li>Add a <code>--resolve</code> option to <code>opam-admin make</code> that builds just the archives you need for a specific installation (#1031)
</li>
<li>Fixed handling of spaces in filenames in internal files (#1014)
</li>
<li>Replace calls to <code>which</code> by a more portable call (#1061)
</li>
<li>Fixed generation of the init scripts in some cases (#1011)
</li>
<li>Better reports on package patch errors (#987, #988)
</li>
<li>More accurate warnings for unknown package dependencies (#1079)
</li>
<li>Added <code>opam config report</code> to help with bug reports (#1034)
</li>
<li>Do not reinstall dev packages with <code>opam upgrade <pkg></code> (#1001)
</li>
<li>Be more careful with <code>opam init</code> to a non-empty root directory (#974)
</li>
<li>Cleanup build-dir after successful compiler installation to save on space (#1006)
</li>
<li>Improved OSX compatibility in the external solver tools (#1074)
</li>
<li>Fixed messages printed on update that were plain wrong (#1030)
</li>
<li>Improved detection of meaningful changes from upstream packages to trigger recompilation
</li>
</ul>
OCamlPro Highlights: November 2013https://ocamlpro.com/blog/2013_12_02_ocamlpro_highlights_november_20132013-12-03T08:12:13Z2013-12-03T08:12:13Z
Fabrice Le Fessant
New Team Members We are pleased to welcome three new members in our OCamlPro team since the beginning of November: Benjamin Canou started working at OCamlPro on the Richelieu project, an effort to bring better safety and performance to the Scilab language. He is in charge of a type inference algorit...<h2>New Team Members</h2>
<p>We are pleased to welcome three new members in our OCamlPro team since the beginning of November:</p>
<ul>
<li>Benjamin Canou started working at OCamlPro on the Richelieu project,
an effort to bring better safety and performance to the
<a href="https://www.scilab.org/">Scilab</a> language. He is in charge of a
type inference algorithm that will serve both as a developper tool
and in coordination with a JIT. He spent his first month
understanding the darkest corners of the language, and then writing
a versatile AST with a parser to build it. Actually, this is not an
easy task, because the language gives different statuses to
characters (including spaces) depending on the context, leading to
non-trivial lexing. But the real source of problems is the fact that
the original lexparser is intermingled with the interpreter inside a
big bunch of venerable FORTRAN code. This old fellow makes parsing
choices depending on the dynamic typing context, allows its users to
catch syntax errors at runtime, among other fun things. The new
OCaml lexer and parser is handwritten in around a thousand lines,
has performance comparable to a <code>Lex</code> and <code>Yacc</code> generated one, and
is resilient to errors so it could be integrated into an IDE to
detect errors on the fly without stopping on the first one. Once
again, it’s OCaml to the rescue of the weak and elderly!An example
of the kind of code that can be written in Scilab:
</li>
</ul>
<pre><code class="language-scilab">if return = while then [ 12..
34.. … .. …
56 } ; else ‘”‘”
end
</code></pre>
<p>which is parsed into:</p>
<pre><code class="language-scilab">— parsed in 0.000189–
(script (if (== !return !while) (matrix (row 123456)) “‘”))
— messages
1.10:1.11: use of deprecated operator ‘=’
— end
</code></pre>
<ul>
<li>
<p>Gregoire Henry started working at OCamlPro on the Bware project. He
is tackling the optimization of memory performance of automatic
provers written in OCaml, in collaboration with Cagdas Bozman. One
of his first contributions after joining us was to exhume his
internship work of 2004, an implementation of <a href="https://github.com/OCamlPro/ocplib-graphics">Graphics for Mac OS
X</a> that we are going to
use for our online OCaml IDE!</p>
</li>
<li>
<p>Thomas Blanc started a PhD at OCamlPro after his summer internship
with us. He is going to continue his work on whole-program analysis,
especially as a way to detect uncaught exceptions. We hope his tool
will be a good replacement for the <a href="https://github.com/OCamlPro/ocamlexc">ocamlexn
tool</a>
written by Francois Pessaux.</p>
</li>
</ul>
<h2>Compiler Updates</h2>
<p>On the compiler optimization front, Pierre Chambart got direct access
to the OCaml SVN, so that he will soon upload his work directly into
an SVN branch, which is easier for reviewing and integration into the official
compiler. A first set of optimizations is already scheduled for the
new branch, and he is now working on inlining recursive functions, such
as <code>List.map</code>, by inlining the function definition at the call site, when
at least one of its arguments is invariant during recursion.</p>
<p>A function that can benefit a lot from that transformation is:</p>
<pre><code class="language-ocaml">let f l = List.fold_left (+) 0 l
</code></pre>
<pre><code>camlTest__f_1013:
.L102:
movq %rax, %rdi
movq $1, %rbx
jmp camlTest__fold_left_1017@PLT
camlTest__fold_left_1017:
.L101:
cmpq $1, %rdi
je .L100
movq 8(%rdi), %rsi
movq (%rdi), %rdi
leaq -1(%rbx, %rdi), %rbx
movq %rsi, %rdi
jmp .L101
.align 4
.L100:
movq %rbx, %rax
ret
</code></pre>
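<p>For comparison, here is a hand-written OCaml version of what this specialization conceptually amounts to (a sketch of the intended result, not the compiler's actual output):</p>
<pre><code class="language-ocaml">(* [List.fold_left] is duplicated at the call site; since the function
   argument (+) is invariant across the recursion, its call can then be
   inlined away. *)
let f l =
  let rec fold acc = function
    | [] -> acc
    | x :: tl -> fold (acc + x) tl
  in
  fold 0 l
</code></pre>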
<h2>Development Tools</h2>
<h3>Release of OPAM 1.1</h3>
<p>After lots of testing and fixing, the official version 1.1.0 of OPAM
has finally been released. It features lots of stability improvements,
and a reorganized and cleaner repo now hosted at
<a href="https://opam.ocaml.org">https://opam.ocaml.org</a>. Work goes on on OPAM
as we’ll release <code>opam-installer</code> soon, a small script that enables
using and testing <code>.install</code> files. This is a step toward a better
integration of OPAM with existing build tools, and we are experimenting
with ways to ease usage for Coq packages, to generate binary packages,
and to enhance portability.</p>
<h3>Binary Packages for OPAM</h3>
<p>We also started to experiment with binary packages. We developed a
very small tool, <code>ocp-bin</code>, that monitors the compilation of every OPAM
package, takes a snapshot of OPAM files before and after the
compilation and installation, and generates a binary archive for the
package. The next time the package is re-installed, with the same
dependencies, the archive is used instead of compiling the package
again.</p>
<p>For a typical package, the standard OPAM file:</p>
<pre><code>build: [
[ "./configure" "--prefix" "%{prefix}%" ]
[ make ]
[ make "install" ]
]
remove: [
[ make "uninstall" ]
]
</code></pre>
<p>has to be modified in:</p>
<pre><code>build: [
[ "ocp-bin" "begin" "%{package}%" "%{version}%" "%{compiler}%" "%{prefix}%"
"-opam" "-depends" "%{depends}%" "-hash" "%{hash}%"
"-nodeps" "ocamlfind." ]
[ "ocp-bin" "--" "./configure" "--prefix" "%{prefix}%" ]
[ "ocp-bin" "--" make ]
[ "ocp-bin" "--" make "install" ]
[ "ocp-bin" "end" ]
]
remove: [
[ "!" "ocp-bin" "uninstall"
"%{package}%" "%{version}%" "%{compiler}%" "%{prefix}%" ]
]
</code></pre>
<p>Such a transformation would be automated in the future by adding a
field <code>ocp-bin: true</code>. Note that, since <code>ocp-bin</code> takes care of the
deinstallation of the package, it would ensure a complete and correct
deinstallation of all packages.</p>
<p>We also implemented a client-server version of <code>ocp-bin</code>, to be able
to share binary packages between users. The current limitation with
this approach is that many binary packages are not relocatable: if
packages are compiled by Bob to be installed in
<code>/home/bob/.opam/4.01.0</code>, the same packages will only be usable on a
different computer by a user with the same home path! Although it can
still be useful for a user with several computers, we now plan to
investigate how to build relocatable packages for OCaml.</p>
<h3>Stable Release of ocp-index</h3>
<p>Always looking for a way to provide better tools to the OCaml
programmer, we are happy to announce the first stable release of
<code>ocp-index</code>, which provides quick access to the installed interfaces
and documentation as well as some source-browsing features (lookup
ident definition, search for uses of an ident, etc).</p>
<h2>Profiling Alt-Ergo with <code>ocp-memprof</code>: The Killer App</h2>
<p>One of the most exciting events this month is the use of the
<code>ocp-memprof</code> tool to profile an execution of Alt-Ergo on a big formula
generated by the Cubicle model checker. The story is the following:</p>
<p>The formula was generated from a transition system modeling the FLASH
coherence cache protocol, plus additional information computed by
Cubicle during the verification of FLASH’s safety. It contains a
sub-formula made of nested conjunctions of 999 elements. Its proof
requires reasoning in the combination of the free theory of equality,
enumerated data types and quantifiers. Alt-Ergo was able to discharge
it in only 10 seconds. However, Alain Mebsout — who is doing his PhD
thesis on Cubicle — noticed that Alt-Ergo allocates more than 60 MB
during its execution.</p>
<p>In order to localize the source of this abnormal memory consumption,
we installed the OCaml Memory Profiler runtime, version 4.00.1+memprof
(available in the private OPAM repository of OCamlPro) and compiled
Alt-Ergo using the -bin-annot option in order to dump .cmt files. We then
executed the prover on Alain’s example as shown below, without any
instrumentation of Alt-Ergo’s code.</p>
<pre><code class="language-shell-session">$ OCAMLRUNPARAM=m ./alt-ergo.opt formula.mlw
</code></pre>
<p>This execution caused the modified OCaml compiler to dump a snapshot
of the typed heap at every major collection of the GC. The names of
dumped files are of the form
<code>memprof.<PID>.<DUMP-NAME>.<image-number>.dump</code>, where PID is a natural
number that identifies the set of dumped files during a particular
execution.</p>
<p>Dumped files were then fed to the <code>ocp-memprof</code> tool (available in the
TypeRex-Pro toolbox) using the syntax below. The output of this
step (a <code>.hp</code> file) was then converted to a <code>.ps</code> file with the <code>hp2ps</code>
command. In the end, we obtained the diagram shown in the figure
below.</p>
<pre><code class="language-shell-session">$ ./ocp-memprof -loc -sizes PID
</code></pre>
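<p>For completeness, the conversion step mentioned above is a one-liner; the
<code>.hp</code> file name below is purely illustrative, as it depends on how
<code>ocp-memprof</code> names its output:</p>
<pre><code class="language-shell-session">$ hp2ps alt-ergo.hp    # produces alt-ergo.ps with the allocation graph
</code></pre>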
<p><img src="/blog/assets/img/alt-ergo-memprof-before.png" alt="alt-ergo-memprof-before.png" /></p>
<p>From the figure above, one can extract the following information:</p>
<ul>
<li>
<p>there were 15 major collections of OCaml’s GC during the above execution (the x-axis),</p>
</li>
<li>
<p>Alt-Ergo allocated more than 60 MB during its execution (the y-axis),</p>
</li>
<li>
<p>Some function in file <code>src/preprocess/why_typing.ml</code> is allocating a
lot of data of type <code>Parsed.pp_desc</code> at line 868 (the first square
of the legend).</p>
</li>
</ul>
<p>The third point corresponds to a piece of code used in a recursive
function that performs alpha renaming on parsed formulas to avoid
variable captures. This code is the following:</p>
<pre><code class="language-ocaml">let rec alpha_renaming_b s f =
…
| PPinfix(f1, op, f2) -> (* 'op' may be the AND operator *)
let ff1 = alpha_renaming_b s f1 in
let ff2 = alpha_renaming_b s f2 in
PPinfix(ff1, op, ff2) (* line 868 *)
…
</code></pre>
<p>Actually, in 99% of cases there are no capture problems, and the function just
reconstructs a new value <code>PPinfix(ff1, op, ff2)</code> that is structurally
equal (<code>=</code>) to its argument <code>f</code>. For very big formulas (recall that
Alain’s formula contains a nested conjunction of 999 elements), this
causes Alt-Ergo to allocate a lot.</p>
<p>Fixing this behavior was straightforward. We only had to check whether
recursive calls to the alpha renaming function returned modified values,
using physical equality <code>==</code>. If not, no renaming was performed and we can
safely return the formula given as argument. This way, the
function will never allocate for formulas without capture issues. For
instance, the piece of code given above is fixed as follows:</p>
<pre><code class="language-ocaml">let rec alpha_renaming_b s f =
…
| PPinfix(f1, op, f2) ->
let ff1 = alpha_renaming_b s f1 in
let ff2 = alpha_renaming_b s f2 in
if ff1 == f1 && ff2 == f2 then f (* no renaming performed by recursive calls ? *)
else PPinfix(ff1, op, ff2)
…
</code></pre>
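<p>The same sharing-preserving pattern applies to any recursive rewriting
function. As a minimal, self-contained sketch (not taken from the Alt-Ergo
sources), here is the idea applied to lists: a node is rebuilt only when a
recursive call actually returns something new, so an unchanged input costs no
allocation at all.</p>
<pre><code class="language-ocaml">(* Rebuild a cons cell only when the head or the tail really changed;
   otherwise return the input list itself, preserving physical equality. *)
let rec map_sharing f = function
  | [] -> []
  | x :: tl as l ->
    let x' = f x in
    let tl' = map_sharing f tl in
    if x' == x && tl' == tl then l else x' :: tl'

(* With the identity function, no new list is allocated at all. *)
let () =
  let l = [1; 2; 3] in
  assert (map_sharing (fun x -> x) l == l)
</code></pre>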
<p>Once we applied the patch to the whole function <code>alpha_renaming_b</code>,
<code>Alt-Ergo</code> only needed 2 seconds and less than 2.2 MB of memory to prove our
formula. Profiling an execution of the patched version of the prover with
OCaml 4.00.1+memprof and <code>ocp-memprof</code> produced the diagram below. The
difference with the first diagram is really impressive.</p>
<p><img src="/blog/assets/img/alt-ergo-memprof-after.png" alt="alt-ergo-memprof-after.png" /></p>
<h2>Other R&D Projects</h2>
<h3>Scilint, the Scilab Style-Checker</h3>
<p>This month our work on Richelieu was mainly focused on improving
Scilint. After some discussions with knowledgeable Scilab users, we
chose a new set of warnings to implement. Among other things, those
warnings analyze primitive functions and their arguments as well as
loop variables. Another important thing was to allow SciNotes,
Scilab’s editor, to display our warnings. This has been done by
implementing support for Firehose. Finally, some minor bugs were also
fixed.</p>
OPAM 1.1.0 releasedhttps://ocamlpro.com/blog/2013_11_08_opam_1.1.0_released2013-11-08T08:12:13Z2013-11-08T08:12:13Z
Thomas Gazagnaire
After a while staged as RC, we are proud to announce the final release of OPAM 1.1.0! Thanks again to those who have helped testing and fixing the last few issues. Important note The repository format has been improved with incompatible new features; to account for this, the new repository is now ho...<p>After a while staged as RC, we are proud to announce the final release of
<em>OPAM 1.1.0</em>!</p>
<p>Thanks again to those who have helped testing and fixing the last few issues.</p>
<h2>Important note</h2>
<p>The repository format has been improved with incompatible new features; to
account for this, the <em>new</em> repository is now hosted at <a href="https://opam.ocaml.org">opam.ocaml.org</a>,
and the legacy repository at <a href="https://opam.ocamlpro.com">opam.ocamlpro.com</a> is kept to support OPAM
1.0 installations, but is unlikely to benefit from many package updates.
Migration to <a href="https://opam.ocaml.org">opam.ocaml.org</a> will be done automatically as soon as you
upgrade your OPAM version.</p>
<p>You're still free, of course, to use any third-party repositories instead or
in addition.</p>
<h2>Installing</h2>
<p>NOTE: When switching from 1.0, the internal state will need to be upgraded.
THIS PROCESS CANNOT BE REVERTED. We have tried hard to make it fault-
resistant, but failures might happen. In case you have precious data in your
<code>~/.opam</code> folder, it is advised to <strong>backup that folder before you upgrade
to 1.1.0</strong>.</p>
<p>Using the binary installer:</p>
<ul>
<li>download and run <code>https://github.com/ocaml/opam/blob/master/shell/opam_installer.sh</code>
</li>
</ul>
<p>Using the .deb packages from Anil's PPA (binaries are <a href="https://launchpad.net/~avsm/+archive/ppa/+builds?build_state=pending">currently syncing</a>):</p>
<pre><code class="language-shell-session">add-apt-repository ppa:avsm/ppa
apt-get update
sudo apt-get install opam
</code></pre>
<p>For OSX users, the homebrew package will be updated shortly.</p>
<p>or build it from sources at:</p>
<ul>
<li><code>https://github.com/ocaml/opam/releases/tag/1.1.0</code>
</li>
</ul>
<h2>For those who haven't been paying attention</h2>
<p>OPAM is a source-based package manager for OCaml. It supports multiple
simultaneous compiler installations, flexible package constraints, and
a Git-friendly development workflow. OPAM is edited and
maintained by OCamlPro, with continuous support from OCamlLabs and the
community at large (including its main industrial users such as
Jane-Street and Citrix).</p>
<p>The 'official' package repository is now hosted at <a href="https://opam.ocaml.org">opam.ocaml.org</a>,
synchronised with the Git repository at
<a href="https://github.com/ocaml/opam-repository">http://github.com/ocaml/opam-repository</a>, where you can contribute
new packages descriptions. Those are under a CC0 license, a.k.a. public
domain, to ensure they will always belong to the community.</p>
<p>Thanks to all of you who have helped build this repository and made OPAM
such a success.</p>
<h2>Changes</h2>
<p>Too many to list here, see
<a href="https://raw.github.com/OCamlPro/opam/1.1.0/CHANGES">https://raw.github.com/OCamlPro/opam/1.1.0/CHANGES</a></p>
<p>For packagers, some new fields have appeared in the OPAM description format:</p>
<ul>
<li><code>depexts</code> provides facilities for dealing with system (non-OCaml) dependencies
</li>
<li><code>messages</code>, <code>post-messages</code> can be used to notify the user, e.g. of licensing information,
or help her troubleshoot at package installation.
</li>
<li><code>available</code> supersedes <code>ocaml-version</code> and <code>os</code> constraints, and can contain
more expressive formulas
</li>
</ul>
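<p>To make these fields more concrete, here is a hedged sketch of how they might
be combined in a package description (the package is fictitious, and the exact
<code>depexts</code> and filter syntax has evolved across OPAM versions, so treat this
as an illustration rather than a reference):</p>
<pre><code>available: [ ocaml-version >= "4.00.0" & os != "win32" ]
depexts: [
  [["debian"] ["libgmp-dev"]]
]
post-messages: [
  "This package requires a working C compiler and the GMP library." {failure}
]
</code></pre>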
<p>Also, we have integrated the main package repository with Travis, which will
help us to improve the quality of contributions (see <a href="https://anil.recoil.org/2013/09/30/travis-and-ocaml.html">Anil's post</a>).</p>
OCamlPro Highlights, Sept-Oct 2013https://ocamlpro.com/blog/2013_11_01_ocamlpro_highlights_sept_oct_20132013-11-01T08:12:13Z2013-11-01T08:12:13Z
Çagdas Bozman
Here is a short report of our activities in September-October 2013. OCamlPro at OCaml’2013 in Boston We were very happy to participate to OCaml’2013, in Boston. The event was a great success, with a lot of interesting talks and many participants. It was a nice opportunity for us to present some ...<p>Here is a short report of our activities in September-October 2013.</p>
<h3>OCamlPro at OCaml’2013 in Boston</h3>
<p>We were very happy to participate in OCaml’2013, in Boston. The event
was a great success, with a lot of interesting talks and many
participants. It was a nice opportunity for us to present some of our
recent work:</p>
<ul>
<li>Fabrice presented his work on <a href="https://ocaml.org/meetings/ocaml/2013/proposals/wxocaml.pdf">the design of the wxOCaml library</a>. Although the <a href="https://github.com/OCamlPro/ocplib-wxOCaml/">wxOCaml library</a>
itself is an interesting project, the goal of his talk was to show that
binding thousands of functions from a C++ library can be automated very
easily in OCaml, and make the bindings easy to maintain and to improve.
</li>
<li>The work of Thomas and Louis on OPAM was presented in a talk by Anil on the <a href="https://ocaml.org/meetings/ocaml/2013/proposals/platform.pdf">OCaml Platform v0.1</a>.
The OCaml Platform is a set of tools, including OPAM, to provide an
ever increasing set of packages for OCaml developers, including
high-quality documentation and broad portability. Some statistics showed
how OPAM, in less than a year, grew from 200 packages to more than 1400
packages, and from 2-3 contributors to about 130 contributors in
September. Another talk, <a href="https://ocaml.org/meetings/ocaml/2013/proposals/ocamlot.pdf">Ocamlot: OCaml Online Testing</a>
presented how sets of packages will now be automatically tested, to
give immediate feedback to contributors, and an evaluation of packages
quality to users.
</li>
<li>Pierre presented his work on <a href="https://ocaml.org/meetings/ocaml/2013/slides/chambart.pdf">Improving OCaml high level optimisations</a> that he also presented in a recent <a href="/blog/2013_05_24_optimisations_you_shouldnt_do">blog post</a>.
</li>
<li>Grégoire presented his work with Jacques Garrigue on <a href="https://ocaml.org/meetings/ocaml/2013/proposals/runtime-types.pdf">Runtime types in OCaml</a>.
In particular, he showed how abstraction is hard to deal with, as there
is a dilemma between the ability to write powerful polytypic functions
and the preservation of the abstraction wanted by the developer for code
modularity.
</li>
<li>Finally, Çagdas presented his work on <a href="https://ocaml.org/meetings/ocaml/2013/slides/bozman.pdf">Profiling the Memory Usage of OCaml Applications without Changing their Behavior</a>.
This new profiler will be able to provide precise memory information on
production OCaml software, by snapshotting the memory and recovering
type information. It is currently being tested on several projects, such
as the <a href="https://why3.lri.fr/">Why3 verification tool</a>.
</li>
</ul>
<p>Of course, the day was full of interesting talks, and we can only advise to see all of them on the <a href="https://ocaml.org/meetings/ocaml/2013/program.html">complete program</a> that is now online.</p>
<p><a href="http://cufp.org/conference/schedule/2013">CUFP’2013 Program</a>
was also very dense. For OCaml users, Dave Thomas, first keynote,
reminded us how important it is to build two-way bridges between OCaml
and other languages: we have the bad habit of only building one-way bridges
to just use other languages from OCaml, and forget that new users will
have to start by using small OCaml components from their existing
software written in another language. Then, Julien Verlaguet <a href="https://www.youtube.com/watch?v=gKWNjFagR9k">presented</a>
the use of OCaml at Facebook to type-check and compile a typed version
of PhP, HipHop, that is now used for a large part of the code at
Facebook.</p>
<h3>Software Projects</h3>
<p>The period of September-October was also very busy trying to find
some funding for our projects. Fortunately, we still managed to make a
lot of progress in the development of these projects:</p>
<h4>OPAM</h4>
<p>Lots has been going on regarding OPAM, as the 1.1 release is being
pushed forward, with a beta and an RC available already. This release
focuses on stability improvements and bug-fixes, but is nonetheless a
large step from 1.0, with an enhanced update mechanism, extended
metadata, an enhanced ‘pin’ workflow for developers, and much more.</p>
<p>We are delighted by the success met by OPAM, which was mentioned
again and again at the OCaml’2013 workshop, where we received a lot
of warm, positive feedback. To be sure that this belongs to the community,
after licensing all metadata of the repository under CC0 (as close to
public domain as legally possible), we have worked hand in hand with
OCamlLabs to migrate it to <a href="https://opam.ocaml.org/">opam.ocaml.org</a>. External repositories for <a href="https://github.com/vouillon/opam-windows-repository">Windows</a>, <a href="https://github.com/vouillon/opam-android-repository">Android</a> <a href="https://github.com/search?q=opam-repo&type=Repositories&ref=searchresults">and so on</a> are appearing, which is a really good thing, too.</p>
<h4>The Alt-Ergo SMT Solver</h4>
<p>In September, we officially announced the distribution and the support of <a href="https://alt-ergo.lri.fr/">Alt-Ergo</a> by OCamlPro and launched its <a href="https://alt-ergo.ocamlpro.com/">new website</a>.
This site allows you to download public releases and to discover available
support offerings. We have also published a new public release (version
0.95.2) of the prover. The main changes in this minor release are:
source code reorganization, simplification of quantifier instantiation
heuristics, GUI improvement to reduce latency when opening large files,
as well as various bug fixes.</p>
<p>During September, we also re-implemented and simplified other parts
of Alt-Ergo. In addition, we started the integration of a new SAT-solver
based on miniSAT (implemented as a plug-in) and the development of a
new tool, called Ctrl-Alt-Ergo, that automates the most interesting
strategies of Alt-Ergo. The experiments we made during October are very
encouraging as shown by <a href="/blog/2013_10_02_alt_ergo_ocamlpro_two_months_later">our previous blog post</a>.</p>
<h4>Multi-runtime</h4>
<p>Luca Saiu completed his work at Inria and on the multi-runtime
branch, fixing the last bugs and leaving the code in a shape not too far
removed from permitting its eventual integration into the OCaml
mainline.</p>
<p>Now, the code has a clean configuration-time facility for disabling
the multi-runtime system, and compatibility is restored with
architectures not including the required assembly support to at least
compile and work using a single runtime. A crucial optimization permits
working in this mode with extremely little overhead with respect to
stock OCaml. Testing on an old PowerPC 32-bit machine revealed a few
minor portability problems related to word size and endianness.</p>
<h4>Compiler optimisations</h4>
<p>We have been working on allowing cross module inlining. We wanted to
be able to show a version generating strictly better code than the
current compiler. This milestone being reached, we are now preparing a
patch series for upstreaming the base parts. We are also working on
polishing the remaining problems: the passes were written in as
simple a way as possible, so compilation time is still a bit high, and
there are a few difficulties remaining with cross-module inlining and
packs.</p>
<h3>The INRIA-OCamlPro Lab Team</h3>
<p>The team is also evolving, and some of us are now leaving the team to join other projects:</p>
<ul>
<li>After two years with us, Thomas Gazagnaire has left OCamlPro in October to work most of his time on <a href="http://www.openmirage.org/">Mirage</a>
in Cambridge (UK). Thomas was OCamlPro’s first employee, and OCamlPro
probably wouldn’t exist without him. Thomas has also been the main
architect of OPAM, and was involved in the design of many of our
projects. Louis Gesbert will continue his work on developing and
maintaining OPAM.
</li>
<li>After one year with us, Luca Saiu has left Inria in October. Luca
has made a tremendous work on the implementation of a multicore-OCaml,
where every runtime runs in a different memory space with its own
garbage collector. We hope to be able to upstream his work soon to the
official OCaml distribution.
</li>
<li>After an internship with us, Pierrick Couderc, Souhire Kenawi and
David Maison returned to their masters’ studies in September. Souhire
worked on testing the development of iOS applications on Linux with
OCaml, a very challenging task! Pierrick and David developed an online
editor for OCaml that we are going to release very soon.
</li>
</ul>
<p>This blog post was about departures, but stay connected, next month,
we are going to announce some newcomers who decided to join the team for
the winter!</p>
OPAM 1.1.0 release candidate outhttps://ocamlpro.com/blog/2013_10_14_opam_1.1.0_release_candidate_out2013-10-14T08:12:13Z2013-10-14T08:12:13Z
Louis Gesbert
OPAM 1.1.0 is ready, and we are shipping a release candidate for packagers and all interested to try it out. This version features several bug-fixes over the September beta release, and quite a few stability and usability improvements. Thanks to all beta-testers who have taken the time to file repor...<p><strong>OPAM 1.1.0 is ready</strong>, and we are shipping a release candidate for
packagers and all interested to try it out.</p>
<p>This version features several bug-fixes over the September beta release, and
quite a few stability and usability improvements. Thanks to all beta-testers
who have taken the time to file reports, and helped a lot tackling the
remaining issues.</p>
<h3>Repository change to opam.ocaml.org</h3>
<p>This release is synchronized with the migration of the main repository from
ocamlpro.com to ocaml.org. A redirection has been put in place, so that all
up-to-date installations of OPAM should be redirected seamlessly.
OPAM 1.0 instances will stay on the old repository, so that they won't be
broken by incompatible package updates.</p>
<p>We are very happy to see the impressive amount of contributions to the OPAM
repository, and this change, together with the licensing of all metadata under
CC0 (almost public domain), guarantees that these efforts belong to the
community.</p>
<h2>If you are upgrading from 1.0</h2>
<p>The internal state will need to be upgraded at the first run of OPAM 1.1.0.
THIS PROCESS CANNOT BE REVERTED. We have tried hard to make it fault-
resistant, but failures might happen. In case you have precious data in your
<code>~/.opam</code> folder, it is advised to <strong>backup that folder before you upgrade to 1.1.0</strong>.</p>
<h3>Installing</h3>
<p>Using the binary installer:</p>
<ul>
<li>download and run <code>https://github.com/ocaml/opam/blob/master/shell/opam_installer.sh</code>
</li>
</ul>
<p>You can also get the new version either from Anil's unstable PPA:</p>
<pre><code class="language-shell-session">add-apt-repository ppa:avsm/ppa-testing
apt-get update
sudo apt-get install opam
</code></pre>
<p>or build it from sources at:</p>
<ul>
<li><code>https://github.com/OCamlPro/opam/releases/tag/1.1.0-RC</code>
</li>
</ul>
<h3>Changes</h3>
<p>Too many to list here, see
<a href="https://raw.github.com/OCamlPro/opam/1.1.0-RC/CHANGES">https://raw.github.com/OCamlPro/opam/1.1.0-RC/CHANGES</a></p>
<p>For packagers, some new fields have appeared in the OPAM description format:</p>
<ul>
<li><code>depexts</code> provides facilities for dealing with system (non-OCaml)
dependencies
</li>
<li><code>messages</code>, <code>post-messages</code> can be used to notify the user or help her troubleshoot at package installation.
</li>
<li><code>available</code> supersedes <code>ocaml-version</code> and <code>os</code> constraints, and can contain
more expressive formulas
</li>
</ul>
Alt-Ergo @ OCamlPro: Two months laterhttps://ocamlpro.com/blog/2013_10_02_alt_ergo_ocamlpro_two_months_later2013-10-02T08:12:13Z2013-10-02T08:12:13Z
Mohamed Iguernlala
As announced in a previous post, I joined OCamlPro at the beginning of September and I started working on Alt-Ergo. Here is a report presenting the tool and the work we have done during the two last months. Alt-Ergo at a Glance Alt-Ergo is an open source automatic theorem prover based on SMT technol...<p>As announced in <a href="/blog/2013_09_04_ocamlpro_highlights_august_2013">a previous post</a>, I joined OCamlPro at the beginning of September and I started working on Alt-Ergo. Here is a report presenting the tool and the work we have done during the two last months.</p>
<h3>Alt-Ergo at a Glance</h3>
<p><a href="https://alt-ergo.ocamlpro.com">Alt-Ergo</a> is an open source automatic theorem prover based on <a href="https://en.wikipedia.org/wiki/Satisfiability_Modulo_Theories">SMT</a> technology. It is developed at <a href="https://www.lri.fr">Laboratoire de Recherche en Informatique</a>, <a href="https://www.inria.fr/centre/saclay">Inria Saclay Ile-de-France</a> and <a href="https://www.cnrs.fr/index.php">CNRS</a> since 2006. It is capable of reasoning in a combination of several built-in theories such as uninterpreted equality, integer and rational arithmetic, arrays, records, enumerated data types and AC symbols. It also handles quantified formulas and has a polymorphic first-order native input language. Alt-Ergo is written in <a href="https://caml.inria.fr/ocaml/index.fr.html">OCaml</a>. Its core has been formally proved in the <a href="https://coq.inria.fr">Coq proof assistant</a>.</p>
<p>Alt-Ergo has been involved in a qualification process (DO-178C) by <a href="http://www.airbus.com">Airbus Industrie</a>. During this process, a qualification kit has been produced. It was composed of a technical document with tool requirements (TR) that gives a precise description of each part of the prover, a companion document (~ 450 pages) of tests, and an instrumented version of the tool with a TR trace mechanism.</p>
<h3>Alt-Ergo Spider Web</h3>
<p>Alt-Ergo is mainly used to prove the validity of mathematical formulas generated by program verification platforms. It was originally designed and tuned to prove formulas generated by the <a href="https://why.lri.fr">Why tool</a>. Now, it is used by different tools and in various contexts, in particular via the <a href="https://why3.lri.fr">Why3 platform</a>. As shown by the diagram below, Alt-Ergo is used to prove formulas:</p>
<ul>
<li>generated from Ada code by SPARK 2005 and <a href="https://www.spark-2014.org">SPARK 2014</a>,
</li>
<li>generated from C programs by <a href="https://frama-c.com">Frama-C</a> and <a href="https://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1028953&tag=1">CAVEAT</a>,
</li>
<li>produced from WhyML programs by <a href="https://why3.lri.fr">Why3</a>,
</li>
<li>translated from proof obligations generated by <a href="https://www.atelierb.eu">Atelier-B</a>,
</li>
</ul>
<p>Moreover, Alt-Ergo is used in the context of cryptographic protocols verification by <a href="https://www.easycrypt.info">EasyCrypt</a> and in SMT-based model checking by <a href="https://cubicle.lri.fr">Cubicle</a>.</p>
<p><img src="/blog/assets/img/graph_ae_spider.png" alt="ae-spider" /></p>
<h3>Some "Hello World" Examples</h3>
<p>Below are some basic formulas written in the Why input syntax. Each example is proved valid by Alt-Ergo. The first formula is very simple and is proved with straightforward arithmetic reasoning. <code>goal g2</code> requires reasoning in the combination of functional arrays and linear arithmetic, etc. The last example contains a quantified sub-formula with a polymorphic variable <code>x</code>. Generating four ground instances of this axiom where <code>x</code> is replaced by <code>1</code>, <code>true</code>, <code>1.4</code> and <code>a</code> respectively is necessary to prove <code>goal g5</code>.</p>
<p>** Simple arithmetic operation **</p>
<pre><code class="language-ocaml">goal g1 : 1 + 2 = 3
</code></pre>
<p>** Theories of functional arrays and linear integer arithmetic **</p>
<pre><code class="language-ocaml">logic a : (int, int) farray
goal g2 : forall i:int. i = 6 -> a[i<-4][5] = a[i-1]
</code></pre>
<p>** Theories of records and linear integer arithmetic **</p>
<pre><code class="language-ocaml">type my_record = { a : int ; b : int }
goal g3 : forall v,w : my_record. 2 * v.a = 10 -> { v with b = 5} = w -> w.a = 5
</code></pre>
<p>** Theories of enumerated data types and uninterpreted equality **</p>
<pre><code class="language-ocaml">type my_sum = A | B | C
logic P : 'a -> prop
goal g4 : forall x : my_sum. P(C) -> x<>A and x<>B -> P(x)
</code></pre>
<p>** Formula with quantifiers and polymorphism **</p>
<pre><code class="language-ocaml">axiom a: forall x : 'a. P(x)
goal g5 : P(1) and P(true) and P(1.4) and P(a)
</code></pre>
<p>** Running Alt-Ergo on the examples above **</p>
<pre><code class="language-shell-session">$ alt-ergo examples.why
File "examples.why", line 2, characters 1-21:Valid (0.0120) (0)
File "examples.why", line 6, characters 1-53:Valid (0.0000) (1)
File "examples.why", line 10, characters 1-81:Valid (0.0000) (3)
File "examples.why", line 15, characters 1-59:Valid (0.0000) (6)
File "examples.why", line 19, characters 1-47:Valid (0.0000) (10)
</code></pre>
<h3>Alt-Ergo @ OCamlPro</h3>
<p>On September 20, we officially announced the distribution and the support of Alt-Ergo by OCamlPro and launched its <a href="https://alt-ergo.ocamlpro.com">new website</a>. This site allows to download public releases of the prover and to discover available support offerings. It'll be enriched with additional content progressively. The former Alt-Ergo's web page hosted by LRI is now devoted to theoretical foundations and academic aspects of the solver.</p>
<p>We have also published a new public release (version 0.95.2) of Alt-Ergo. The main changes in this minor release are: source code reorganization into sub-directories, simplification of quantifiers instantiation heuristics, GUI improvement to reduce latency when opening large files, as well as various bug fixes.</p>
<p>In addition to the re-implementation and the simplification of some parts of the prover (e.g. internal literals representation, theories combination architecture, ...), the main novelties of the current master branch of Alt-Ergo are the following:</p>
<ul>
<li>The user can now specify an external (plug-in) SAT-solver instead of the default DFS-based engine. We experimentally provide a CDCL solver based on miniSAT that can be plugged to perform satisfiability reasoning. This solver is more efficient when formulas contain a rich propositional structure.
</li>
<li>We started the development of a new tool, called Ctrl-Alt-Ergo, in which we put our expertise by implementing the most interesting strategies of Alt-Ergo. The experiments we made with our internal benchmarks are very promising, as shown below.
</li>
</ul>
<h3>Experimental Evaluation</h3>
<p>We compared the performance of the latest public releases of Alt-Ergo with the current master branch of both Alt-Ergo and Ctrl-Alt-Ergo (commit <code>ce0bba61a1fd234b85715ea2c96078121c913602</code>) on our internal test suite composed of 16209 formulas. The timeout was set to 60 seconds and memory was limited to 2GB per formula. Benchmark descriptions and the results of our evaluation are given below.</p>
<h4>Why3 Benchmark</h4>
<p>This benchmark contains 2470 formulas generated from Why3's gallery of WhyML programs. Some of these formulas are out of scope of current SMT solvers. For instance, the proof of some of them requires inductive reasoning.</p>
<h4>SPARK Hi-lite Benchmark</h4>
<p>This benchmark is composed of 3167 formulas generated from Ada programs used during the Hi-lite project. It is known that some formulas are not valid.</p>
<h4>BWare Benchmark</h4>
<p>This test-suite contains 10572 formulas translated from proof obligations generated by Atelier-B. These proof obligations come from industrial B projects and are proved valid.</p>
<table class="tableau2">
<thead>
<tr>
<td></td>
<th>Alt-Ergo<br />
version 0.95.1</th>
<th>Alt-Ergo<br />
version 0.95.2</th>
<th>Alt-Ergo<br />
master branch*</th>
<th>Ctrl-Alt-Ergo<br />
master branch*</th>
</tr>
</thead>
<tbody>
<tr>
<th>Release date</th>
<td>Mar. 05, 2013</td>
<td>Sep. 20, 2013</td>
<td>– – –</td>
<td>– – –</td>
</tr>
<tr>
<th>Why3 benchmark</th>
<td>2270<br />
(91.90 %)</td>
<td>2288<br />
(92.63 %)</td>
<td>2308<br />
(93.44 %)</td>
<td>2363<br />
(95.67 %)</td>
</tr>
<tr>
<th>SPARK benchmark</th>
<td>2351<br />
(74.23 %)</td>
<td>2360<br />
(74.52 %)</td>
<td>2373<br />
(74.93 %)</td>
<td>2404<br />
(75.91 %)</td>
</tr>
<tr>
<th>BWare benchmark</th>
<td>5609<br />
(53.05 %)</td>
<td>9437<br />
(89.26 %)</td>
<td>10072<br />
(95.27 %)</td>
<td>10373<br />
(98.12 %)</td>
</tr>
</tbody>
</table>
<p>(*) commit <code>ce0bba61a1fd234b85715ea2c96078121c913602</code></p>
OPAM 1.1.0 beta releasedhttps://ocamlpro.com/blog/2013_09_20_opam_1.1.0_beta_released2013-09-20T08:12:13Z2013-09-20T08:12:13Z
Thomas Gazagnaire
We are very happy to announce the beta release of OPAM version 1.1.0! OPAM is a source-based package manager for OCaml. It supports multiple simultaneous compiler installations, flexible package constraints, and a Git-friendly development workflow which. OPAM is edited and maintained by OCamlPro, wi...<p>We are very happy to announce the <strong>beta release</strong> of OPAM version 1.1.0!</p>
<p>OPAM is a source-based package manager for OCaml. It supports multiple
simultaneous compiler installations, flexible package constraints, and
a Git-friendly development workflow. OPAM is edited and
maintained by OCamlPro, with continuous support from OCamlLabs and the
community at large (including its main industrial users such as
Jane-Street and Citrix).</p>
<p>Since its first official release <a href="/blog/2013_03_15_opam_1.0.0_released">last March</a>, we have fixed many
bugs and added lots of <a href="https://github.com/OCamlPro/opam/issues?milestone=17&page=1&state=closed">new features and stability improvements</a>. New
features go from more metadata to the package and compiler
descriptions, to improved package pin workflow, through a much faster
update algorithm. The full changeset is included below.</p>
<p>We are also delighted to see the growing number of contributions from
the community to both OPAM itself (35 contributors) and to its
metadata repository (100+ contributors, 500+ unique packages, 1500+
packages). It is really great to also see alternative metadata
repositories appearing in the wild (see for instance the repositories
for <a href="https://github.com/vouillon/opam-android-repository">Android</a>, <a href="https://github.com/vouillon/opam-windows-repository">Windows</a> and <a href="https://github.com/search?q=opam-repo&type=Repositories&ref=searchresults">so on</a>). To be sure that the
community efforts will continue to benefit everyone and to
underline our commitment to OPAM, we are rehousing it at
<code>http://opam.ocaml.org</code> and switching the license to CC0 (see <a href="https://github.com/OCamlPro/opam-repository/issues/955">issue #955</a>,
where 85 people are commenting on the thread).</p>
<p>The binary installer has been updated for OSX and x86_64:</p>
<ul>
<li><code>https://github.com/ocaml/opam/blob/master/shell/opam_installer.sh</code>
</li>
</ul>
<p>You can also get the new version either from Anil's unstable PPA:</p>
<pre><code class="language-shell-session">add-apt-repository ppa:avsm/ppa-testing
apt-get update
sudo apt-get install opam
</code></pre>
<p>or build it from sources at:</p>
<ul>
<li><code>https://github.com/OCamlPro/opam/releases/tag/1.1.0-beta</code>
</li>
</ul>
<p>NOTE: If you upgrade from OPAM 1.0, the first time you will run the
new <code>opam</code> binary it will upgrade its internal state in an incompatible
way: THIS PROCESS CANNOT BE REVERTED. We have tried hard to make this
process fault-resistant, but failures might happen. In case you have
precious data in your <code>~/.opam</code> folder, it is advised to <strong>backup that
folder before you upgrade to 1.1</strong>.</p>
<h2>Changes</h2>
<ul>
<li>Automatic backup before any operation which might alter the list of installed packages
</li>
<li>Support for arbitrary sub-directories for metadata repositories
</li>
<li>Lots of colors
</li>
<li>New option <code>opam update -u</code> equivalent to <code>opam update && opam upgrade --yes</code>
</li>
<li>New <code>opam-admin</code> tool, bundling the features of <code>opam-mk-repo</code> and
<code>opam-repo-check</code> + new 'opam-admin stats' tool
</li>
<li>New <code>available</code>: field in opam files, superseding <code>ocaml-version</code> and <code>os</code> fields
</li>
<li>Package names specified on the command-line are now understood
case-insensitively (#705)
</li>
<li>Fixed parsing of malformed opam files (#696)
</li>
<li>Fixed recompilation of a package when uninstalling its optional dependencies (#692)
</li>
<li>Added conditional post-messages support, to help users when a package fails to
install for a known reason (#662)
</li>
<li>Rewrite the code which updates pin and dev packages to be quicker and more reliable
</li>
<li>Add {opam,url,desc,files/} overlay for all packages
</li>
<li><code>opam config env</code> now detects the current shell and outputs a sensible default if
no override is provided.
</li>
<li>Improve <code>opam pin</code> stability and start displaying information about dev revisions
</li>
<li>Add a new <code>man</code> field in <code>.install</code> files
</li>
<li>Support hierarchical installation in <code>.install</code> files
</li>
<li>Add a new <code>stublibs</code> field in <code>.install</code> files
</li>
<li>OPAM works even when the current directory has been deleted
</li>
<li>speed-up invocation of <code>opam config var VARIABLE</code> when the variable is simple
(e.g. <code>prefix</code>, <code>lib</code>, ...)
</li>
<li><code>opam list</code> now displays only the installed packages. Use <code>opam list -a</code> to get
the previous behavior.
</li>
<li>Invert the depext tag selection (useful for <code>ocamlot</code>)
</li>
<li>Add a <code>--sexp</code> option to <code>opam config env</code> to load the configuration under emacs
</li>
<li>Purge <code>~/.opam/log</code> on each invocation of OPAM
</li>
<li>System compiler with versions such as <code>version+patches</code> are now handled as if this
was simply <code>version</code>
</li>
<li>New <code>OpamVCS</code> functor to generate OPAM backends
</li>
<li>More efficient <code>opam update</code>
</li>
<li>Switch license to LGPL with linking exception
</li>
<li><code>opam search</code> now also searches through the tags
</li>
<li>minor API changes for <code>API.list</code> and <code>API.SWITCH.list</code>
</li>
<li>Improve the syntax of filters
</li>
<li>Add a <code>messages</code> field
</li>
<li>Add a <code>--jobs</code> command line option and add <code>%{jobs}%</code> to be used in OPAM files
</li>
<li>Various improvements in the solver heuristics
</li>
<li>By default, turn-on checking of certificates for downloaded dependency archives
</li>
<li>Check the md5sum of downloaded archives when compiling OPAM
</li>
<li>Improved <code>opam info</code> command (more information, non-zero error code when no patterns match)
</li>
<li>Display OS and OPAM version on internal errors to ease error reporting
</li>
<li>Fix <code>opam reinstall</code> when reinstalling a package which is a dependency of installed packages
</li>
<li>Export and read <code>OPAMSWITCH</code> to be able to call OPAM in different switches
</li>
<li><code>opam-client</code> can now be used in a toplevel
</li>
<li><code>-n</code> now means <code>--no-setup</code> and not <code>--no-checksums</code> anymore
</li>
<li>Fix support of FreeBSD
</li>
<li>Fix installation of local compilers with local paths endings with <code>../ocaml/</code>
</li>
<li>Fix the contents of <code>~/.opam/opam-init/variable.sh</code> after a switch
</li>
</ul>
OCamlPro Highlights, August 2013https://ocamlpro.com/blog/2013_09_04_ocamlpro_highlights_august_20132013-09-04T08:12:13Z2013-09-04T08:12:13Z
Çagdas Bozman
Here is a short report on the different projects we have been working on in August. News from OCamlPro Compiler Optimizations After our reports on better inlining have raised big expectations, we have been working hard on fixing the few remaining bugs. An enhanced alias/constant analysis was added, ...<p>Here is a short report on the different projects we have been working on in August.</p>
<h3>News from OCamlPro</h3>
<h4>Compiler Optimizations</h4>
<p>After our reports on <a href="/blog/2013_07_11_better_inlining_progress_report">better inlining</a>
have raised big expectations, we have been working hard on fixing the
few remaining bugs. An enhanced alias/constant analysis was added, to
provide the information needed to lift some constraints on the
maintained invariants, and simplifying some other passes quite a lot in
the process. We are now working on reestablishing cross-module inlining,
by exporting the new information between compilation units.</p>
<h4>Memory Profiling</h4>
<p>On the memory profiling front, now that the compiler patch is well
tested and quite stable, we started some cleanup to make it more
modular, easier to understand and extend. We also worked on improving
the performance of the profiler (the tool that analyzes the heap
snapshots), by caching some expensive computations, such as extracting
type information from ‘cmt’ files associated with each location, in
files that are shared between executions. We have started testing the
profiler on <a href="https://why3.lri.fr/">the Why3 verification platform</a>, and these optimizations proved very useful to analyze longer traces.</p>
<h4>OPAM Package Manager</h4>
<p>On OPAM, we are still preparing the release of version 1.1. The
release date has shifted a little bit — it is now planned to happen
mid-September, before the <a href="https://ocaml.org/meetings/ocaml/2013/">OCaml’2013 meeting</a> — because we are focusing on getting speed and stability improvements in a very good shape. We are now relying on <a href="https://github.com/OCamlPro/opam-rt">opam-rt</a>, our new regression testing infrastructure, to be sure to get the best level of quality for the release.</p>
<p>Regarding the package and compiler <a href="https://github.com/OCamlPro/opam-repository">metadata</a>,
we are very proud to announce that our community has passed an
important milestone, with more than 100 contributors and 500 different
packages! In order to ensure that these hours of packaging efforts
continue to benefit everyone in the OCaml community in the future, we
are (i) clarifying the license for all the metadata in the package
repository to use <a href="https://github.com/OCamlPro/opam-repository/issues/955">CC0</a> and (ii) discussing with <a href="https://www.cl.cam.ac.uk/projects/ocamllabs/">OCamlLabs</a> and the different stakeholders to migrate all the metadata to the <a href="https://ocaml.org/">ocaml.org</a> infrastructure.</p>
<h4>Simple Build Manager</h4>
<p>We also made progress on the design of our simple build-manager for OCaml, <a href="https://www.typerex.org/ocp-build.html">ocp-build</a>. The <a href="https://github.com/OCamlPro/ocp-build/tree/next">next branch in the GIT repository</a>
features a new, much more expressive package description language:
ocp-build can now be used to build arbitrary files, for example to
generate new source files, or to compile files in other languages than
OCaml. We successfully used the new language to build <a href="https://try.ocamlpro.com/">Try-ocaml</a> and <a href="https://www.typerex.org/ocplib-wxOCaml.html">wxOCaml</a>, completely avoiding the use of “make”.</p>
<p>It can also automatically generate <a href="https://www.typerex.org/ocplib-wxOCaml/doc.0.1/index.html">basic HTML documentation</a>
for libraries using ocamldoc with “ocp-build make -doc”. There are
still some improvements on our TODO list before an official release, for
example improving the support of META files, but we are getting very
close! ocp-build is very efficient: compiling <a href="https://www.typerex.org/ocp-build/merlin.ocp">Merlin with ocp-build</a> takes only 4s on a quad-core while ocamlbuild needs 13s in similar conditions and with the same parallelisation settings.</p>
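<p>For readers who have not seen an ocp-build description yet, the project files
are small and declarative. The fragment below is a rough sketch of the
documented syntax from memory (the library name and file names are made up, and
details may differ in the released versions):</p>
<pre><code>begin library "mylib"
  files = [ "util.ml" "main_types.ml" ]
  requires = [ "unix" ]
end
</code></pre>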
<h4>Graphics on Try-OCaml</h4>
<p><a href="https://try.ocamlpro.com/">Try-OCaml</a> has been improved
with a dedicated implementation of the Graphics module: type “lesson
19”, and you will get some fun examples, including a simple game by
Sylvain Conchon.</p>
<h4>Alt-Ergo Theorem Prover</h4>
<p>We are also happy to welcome Mohamed Iguernelala in the team,
starting at the beginning of September. Mohamed is a great OCaml
programmer, and he will be working on the Alt-Ergo theorem prover, an
SMT-solver in OCaml developed by Sylvain Conchon, and heavily used in
the industry for safety-critical software (aircraft, trains, etc.).</p>
<h3>News from the INRIA-OCamlPro Lab</h3>
<h4>Multi-runtime OCaml</h4>
<p>After thorough testing, the <a href="https://github.com/lucasaiu/ocaml">multi-runtime branch</a>
is getting stable enough for being submitted upstream. The build system
has been fixed to enable the modified OCaml to run, in single-runtime
mode, on architectures for which no multi-runtime port exists yet, while
maintaining API compatibility with mainline OCaml. Thanks to some
clever preprocessor hacks, the performance impact in single-runtime mode
will be negligible.</p>
<h4>Whole-Program Analysis</h4>
<p>Our work on <a href="https://github.com/thomasblanc/ocaml-data-analysis/">whole program analysis</a>,
while still in the early stages, is quickly moving forward, and we
managed to generate well-formed graphs representing a whole OCaml
program. The tool can be fed sources and .cmt files, and at each point
of the program, will compute all of the plausible values every variable
can take, plus the computations that produced those values. We
hope to have it ready for testing the detection of uncaught exceptions
soon.</p>
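<p>To illustrate the kind of defect we are after, consider the toy function below
(the names are made up): nothing in it catches the exceptions its callees may
raise, which is exactly what an analysis of escaping exceptions should report.</p>
<pre><code class="language-ocaml">(* List.assoc raises Not_found when the key is absent, and int_of_string
   raises Failure on malformed input; both may escape from port_of. *)
let port_of config =
  int_of_string (List.assoc "port" config)

let () =
  Printf.printf "port = %d\n" (port_of [ "host", "localhost"; "port", "8080" ])
</code></pre>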
<h4>Editing OCaml Online</h4>
<p>We also made a lot of progress in our Online IDE for OCaml, with code
generation within the browser. The prototype is now quite robust, and
some tricky bugs with the representation of integers and floats in
Javascript have been fixed, so that the generated code is always the
same as the one generated by a standalone compiler. Also, the interface
now allows the user to have a full hierarchy of files and projects in
his workspace. There is still some work to be done on improving the
design, but we are very excited about the possibility of developing in OCaml
without installing anything on the computer!</p>
<h4>Scilab Code Analysis</h4>
<p>For the <a href="https://www.richelieu.pro/">Richelieu</a> project,
after testing some type inference analysis on Scilab code in the last
months, we have now started to implement a new tool, <a href="https://github.com/OCamlPro/scilint">Scilint</a>,
to perform some of this analysis on whole Scilab projects and report
warnings on suspect code. We hope this tool will soon be used by every
Scilab user, to avoid wasting hours of computation before reaching an
easy-to-catch error, such as a misspelled — thus undefined — variable.</p>
<h3>Meeting with the Community</h3>
<p>Some of us are going to present part of this work at <a href="https://ocaml.org/meetings/ocaml/2013/program.html">OCaml’2013</a>,
the OCaml Users and Developers Workshop in Boston. We expect it to be a
good opportunity to get some feedback on these projects from the
community!</p>
News from July https://ocamlpro.com/blog/2013_08_05_news_from_july2013-08-05T08:12:13Z2013-08-05T08:12:13Z
Çagdas Bozman
Once again, here is the summary of our activities for last month. The highlight this month is the release of ocaml-top, an interactive editor for education which works well under Windows and that we hope professors all around the world will use to teach OCaml to their students. We are also continuyi...<p>Once again, here is the summary of our activities for last month. The highlight this month is the release of <a href="http://www.typerex.org/ocaml-top.html">ocaml-top</a>, an interactive editor for education which works well under Windows and that we hope professors all around the world will use to teach OCaml to their students. We are also continuing our work on the improvement of the performance of OCaml, with new inlining heuristics in the compiler and adding multicore support to the runtime.</p>
<h2>Compiler updates</h2>
<p>Last month, we started to get very nice results with our compiler performance improvements. First, Pierre Chambart polished the prototype implementation of his new <code>flambda</code> intermediate language and he started to get impressive <a href="http://ocamlpro.com/2013/07/11/better-inlining-progress-report/">micro-benchmark results</a>, with around 20% – 30% improvements on code using exceptions or functors. Following a discussion with our industrial users, he is currently focusing on improving the compilation of local recursive functions such as the very typical:</p>
<pre><code class="language-ocaml">let f x y =
let rec loop v =
… x …
loop z
in
loop x
</code></pre>
<p>A simple and reasonably efficient solution is to eta-expand the auxiliary function, i.e. add an intermediate function calling the loop with all closure parameters passed as variables. The hard part is then to add the right arguments to all the call sites: luckily enough, the new inlining engine already does that kind of analysis, so it can be re-used here. This means that these constructs will be compiled efficiently by the new inlining heuristics.</p>
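<p>Concretely, the transformation amounts to something like the following
hand-written sketch (identifiers and the loop body are illustrative): the
variables that were free in the closure of <code>loop</code> become explicit
parameters, so the recursive function no longer needs an environment.</p>
<pre><code class="language-ocaml">(* x and y are now passed explicitly instead of being captured. *)
let f x y =
  let rec loop x y acc i =
    if i >= y then acc
    else loop x y (acc + x) (i + 1)
  in
  loop x y 0 0

let () = assert (f 3 4 = 12)
</code></pre>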
<p>Second, Luca Saiu has finished debugging the native thread support on top of his <a href="https://github.com/lucasaiu/ocaml">multi-runtime variant of OCaml</a>, which has become quite usable and is pretty robust now. He has tentatively started adding support for <code>vmthreads</code> as well, concurrently cleaning up context finalization and solving other minor issues, such as configuration scripts for architectures that do not support the multi-runtime features yet. Then, after writing documentation and running a full pass over the sources to remove debugging stubs and prints which pollute the code after months of low-level experimentation, he is going to prepare patches for discussion and submission to the main OCaml compiler.</p>
<p>Çagdas Bozman continued to improve the implementation of his <a href="https://github.com/cago/ocaml">profiling tools</a> for both native and byte-code programs. A great output of his recent work is that the location information is much more precise: with very different techniques for native and byte code, the program locations are now uniquely identified. The usability was improved as well, as the profiling location tables are now embedded directly into the programs. He also improved the post-mortem profiling tools to re-type dumped heaps, which also leads to much more accurate profiling information. Çagdas is now actively using these tools on <a href="http://why3.lri.fr/">why3</a> and he expects to get feedback results very soon to continue to improve his tools.</p>
<p>Finally, Thomas Blanc is still working on whole program analysis for his internship, in order to find possibly uncaught exceptions. The project is moving quite well and the month was spent on analyzing the lambda intermediate representation of the compilation step. With the help of Pierre Chambart, he is working on a <a href="https://github.com/thomasblanc/ocaml-data-analysis">0-CFA library</a> that should allow to compute the “possible” values for each variable at each point of the program. The idea is to make a directed hypergraph with each hyperedge representing an instruction and each vertex being a state of the program. Then search a fixpoint in the possible values propagated through the graph. This allows the compiler to know everywhere in the program what possible values may be returned or what possible exceptions may be raised. In order to create a well-designed graph, it is needed to create a new intermediate representation that looks like Lambda except (mainly) that every expression gets an identifier. The next step is to specify a hypergraph construction for each primitive and control-flow.</p>
<h2>Development Tools</h2>
<h3>Editors</h3>
<p>This month, Louis Gesbert has been busy making the first release of <a href="http://www.typerex.org/ocaml-top.html">ocaml-top</a>, the simple graphical top-level for beginners and students. Together with the web-IDE, this project aims at lowering the entry barrier to OCaml. Ocaml-top features a clean and easy to access interface, with nonetheless advanced features like automatic semantic indentation, error marking, and integrated display of standard library types — using the engines of <a href="https://github.com/OCamlPro/ocp-indent">ocp-indent</a> and <a href="https://github.com/OCamlPro/ocp-index">ocp-index</a> of course. The biggest challenge was probably to make everything work as expected on Microsoft Windows, which was required for most beginners and classrooms.</p>
<p><img src="/blog/assets/img/ocaml_top.png" alt="ocaml-top" /></p>
<p>The two main issues were:</p>
<ul>
<li>Setup the build environment: there are several versions of OCaml for Windows ; we generally want to avoid any dependency on cygwin on the generated program, but it’s very hard to avoid any need for it in the build chain. The easiest solution at the moment is to “cross-compile” from cygwin using the mingw32 gcc compiler. The hard part is to get all the needed libraries properly setup: this felt a lot like Linux 15 years ago, you can find some binaries but generally not properly configured, and there is no consistent packaging system (or at least you can’t find what you want in it).
</li>
<li>Process management: ocaml-top runs the OCaml toplevel as a sub-process, so as not to be impaired by any problem in the user program. Interacting with that process in a portable way is close to impossible, Windows having no POSIX signals, and read/write operations being very different in terms of blocking, etc. Some obscure C bindings were required to simulate a SIGINT that could tell the ocaml process to stop the current computation and return to the prompt. But at this cost, ocaml-top can be run with any existing external OCaml toplevel.
</li>
</ul>
<p>Not mentioning some gtk/lablgtk bugs that were often OS-specific. After having read <a href="http://gallium.inria.fr/%7Escherer/gagallium/the-ocaml-installer-for-windows/">horror stories</a> about the most commonly used “Windows installer generator” <a href="http://nsis.sourceforge.net/Main_Page">NSIS</a>, Louis opted for the Microsoft open source solution <a href="http://wixtoolset.org/">WiX</a> which turned out to be quite clean and efficient, although using a lot of XML. The only point that might be in favor of NSIS is that it can generate the installer from Linux, so it’s much convenient when you cross-compile, which is not the case here ; also worth mentioning, Xen and LVM are really great tools which do save a lot of time when working and testing between two (or more) different OSes.</p>
<p>Always on the editor front, David and Pierrick have been working on a web-IDE for OCaml since the beginning of their internship two months ago. For now, the IDE includes <a href="http://ace.c9.io/">Ace</a>, an editor, plugged with some features specific for OCaml, particularly <a href="https://github.com/OCamlPro/ocp-indent">ocp-indent</a>, made possible by using <a href="http://ocsigen.org/js_of_ocaml/">js_of_ocaml</a> which compiles bytecode to Javascript. It also includes a basic project manager that uses a server to store files for each user. Authentication is done by using <a href="http://www.mozilla.org/en-US/persona/">Mozilla’s Persona</a>. One particularly nice feature they are working on is <em>client-side</em> bytecode generation: this means users can ask their browser to produce the byte-code of the project they are working on <em>without any connection to the server</em> ! Beware that this is still work-in-progress and the feature is not bug-free for the moment. The project (undocumented for now) is available on <a href="https://github.com/pcouderc/ocp-webedit">Github</a>.</p>
<h3>Tools</h3>
<p>Meanwhile, most of my time last month has been spent preparing the next release of OPAM, with the help of Louis Gesbert. This new release will contain a <a href="https://github.com/OCamlPro/opam/issues?milestone=17&page=1&state=closed">lot of bug-fixes</a> and an improved <code>opam update</code> mechanism: it will be much more flexible, faster and more stable than the one present in <code>1.0</code>. Few months ago, I had already pushed a <a href="https://github.com/OCamlPro/opam/pull/597">first batch of patches</a> to the development version, which started to make things look much better. I decided last month to continue improving that feature and make it rock-solid: hence I have started a <a href="https://github.com/samoht/opam-rt">regression testing platform for OPAM</a> which is still young but already damn useful to stabilize my new <a href="https://github.com/OCamlPro/opam/pull/719">set of patches</a>. <code>opam-rt</code> is also written in OCaml: it generates random repositories with random packages, shuffles everything around and checks that OPAM correctly picks-up the changes. In the future this will make it easier to test complex OPAM scenarios and will hopefully lead to a better OPAM.</p>
<p><a href="https://github.com/OCamlPro/ocp-index">ocp-index</a> has seen some progress, with lots of rough edges rounded, and much better performance on big <code>cmi</code> files (typically module packs, like <code>Core.Std</code>). While more advanced functionality is being planned, it is already very helpful, and problems seen in earlier development versions have been fixed. The upcoming release also greatly improves the experience from emacs, and might become the first “stable”. The flow of bugs reported on <a href="https://github.com/OCamlPro/ocp-index">ocp-indent</a> is drying up, which means the tool is gaining some maturity. Not much visible changes for the past month except for a few bug-fixes, but the library interface has been completely rewritten to offer much more flexibility while being more friendly. This has allowed it to be plugged in the Web-IDE (see above), which being executed in JavaScript has much tighter performance constraints — the indent engine is only re-run where required after changes — ; and in ocaml-top, where it is also used to detect top-level phrase bounds.</p>
<h2>Community</h2>
<p>We are proud to be well represented at the <a href="http://ocaml.org/meetings/ocaml/2013/program.html">OCaml Developer Workshop 2013</a>. This year it happens in Boston, in September, co-located with the <a href="http://cufp.org/conference/schedule/2013">Conference of Users of Functional Programming</a>. Both conferences will contain a lot of OCaml-related talks: I am especially excited to hear about <a href="http://cufp.org/conference/schedule/2013">PHP type-inference efforts</a> at Facebook using OCaml! If you are in the area around the 22/23 and 24 of September and you want to chat about OCamlPro and OCaml, we will be around!</p>
Better Inlining: Progress Reporthttps://ocamlpro.com/blog/2013_07_11_better_inlining_progress_report2013-07-11T08:12:13Z2013-07-11T08:12:13Z
Chambart
As announced some time ago, I am working on a new intermediate language within the OCaml compiler to improve its inlining strategy. After some time of bug squashing, I prepared a testable version of the patchset, available either on Github (branch flambda_experiments), or through OPAM, in the follow...<p>As announced <a href="optimisations-you-shouldnt-do">some time ago</a>, I am working on a new intermediate language within the OCaml compiler to improve its inlining strategy. After some time of bug squashing, I prepared a testable version of the patchset, available either on <a href="https://github.com/chambart/ocaml.git">Github</a> (branch <code>flambda_experiments</code>), or through OPAM, in the following repository:</p>
<pre><code class="language-shell-session">opam repo add inlining https://github.com/OCamlPro/opam-compilers-repository.git
opam switch flambda
opam install inlining-benchs
</code></pre>
<p>The series of patches is not ready for benchmarking against real applications, as no cross module information is propagated yet (this is more practical for development because it simplifies debugging a lot), but it already works quite well on single-file code. Some very simple benchmark examples are available in the <code>inlining-benchs</code> package.</p>
<p>The series of patches implements a set of 'reasonable' compilation passes that do not try anything too complicated but, combined, generate quite efficient code.</p>
<h2>Current Status</h2>
<p>As said in the previous post, I decided to design a new intermediate language to implement better inlining heuristics in the compiler. This intermediate language, called <code>flambda</code>, lies between the <code>lambda</code> code and the <code>Clambda</code> code. It has an explicit representation of closures, making them easier to manipulate, and modules do not appear in it anymore (they have already been compiled to static structures).</p>
<p>I then started to implement new inlining heuristics as functions from the <code>lambda</code> code to the <code>flambda</code> code. The following features are already present:</p>
<ul>
<li>intra function value analysis
</li>
<li>variable rebinding
</li>
<li>dead code elimination (which needs purity analysis)
</li>
<li>known match / if branch elimination
</li>
</ul>
<p>In more detail, the chosen strategy is divided into two passes, which can be described by the following pseudo-code:</p>
<pre><code>if function is at toplevel
then
  if applied to at least one constant OR small enough
  then inline
else
  if applied to at least one constant AND small enough
  then inline
</code></pre>
<pre><code>if function is small enough
AND does not contain local function declarations
then inline
</code></pre>
<p>The first pass eliminates most functor applications and functions of the kind:</p>
<pre><code class="language-ocaml">let iter f x =
  let rec aux x = ... f ... in
  aux x
</code></pre>
<p>The second pass eliminates the same kind of functions as OCaml 4.01 does, but, running after the first pass, it can also inline functions revealed by inlining functors.</p>
<h2>Benchmarks</h2>
<p>I ran a few benchmarks to ensure that there were no obvious miscompilations (and there were, but they are now fixed). On benchmarks that were written too carefully there was not much gain, but I got interesting results on some examples: those illustrate the improvements quite well, and can be seen at <code>$(opam config var lib)/inlining-benchs</code> (binaries at <code>$(opam config var bin)/bench-*</code>).</p>
<h3>The Knuth-Bendix Benchmark (single-file)</h3>
<p>Performance gains against OCaml 4.01 are around 20%. The main difference is that exceptions are compiled to constants, hence not allocated when raised. In that particular example, this halves the allocations.</p>
<p>In general, constant exceptions can be compiled to constants when predefined (<code>Not_found</code>, <code>Failure</code>, ...). They cannot yet be when user-defined: to improve this, a few things need to be changed in <code>translcore.ml</code> to annotate values created by exceptions.</p>
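<p>To illustrate the kind of code this affects, here is a hedged sketch (not taken from the benchmark; the names are purely illustrative) of a user-defined, argument-less exception used for control flow. With the annotation described above, raising such a constant exception would not need to allocate:</p>
<pre><code class="language-ocaml">(* A constant exception used to exit an iteration early. *)
exception Found

let exists p a =
  try
    Array.iter (fun x -&gt; if p x then raise Found) a;
    false
  with Found -&gt; true
</code></pre>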
<h3>The Noiz Benchmark</h3>
<p>Performance gains are around 30% against OCaml 4.01. This code uses a lot of higher order functions of the kind:</p>
<pre><code class="language-ocaml">let map_triple f (a,b,c) = (f a, f b, f c)
</code></pre>
<p>OCaml 4.01 can inline <code>map_triple</code> itself but then cannot inline the parameter <code>f</code>. Moreover, when writing:</p>
<pre><code class="language-ocaml">let (x,y,z) = map_triple f (1,2,3)
</code></pre>
<p>the tuples are not really used, and after inlining their allocations can be eliminated (thanks to rebinding and dead code elimination).</p>
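<p>The sketch below (illustrative only, not actual compiler output, with a stand-in definition for <code>f</code>) shows the shape of the code once <code>map_triple</code> has been inlined and the intermediate tuples removed:</p>
<pre><code class="language-ocaml">let f v = v * 2   (* stand-in for the real f, for illustration *)

(* After inlining map_triple, the argument tuple (1,2,3) is never built
   and each component becomes a direct binding. *)
let x = f 1
let y = f 2
let z = f 3
</code></pre>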
<h3>The Set Example</h3>
<p>Performance gains are around 20% compared to OCaml 4.01. This example shows how inlining can help defunctorization: when inlining the <code>Set</code> functor, the provided comparison function can be inlined in <code>Set.add</code>, allowing direct calls everywhere.</p>
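<p>As a reminder of the pattern involved (a standard functor application, not code from the benchmark), the comparison function passed to the functor is exactly what the inliner can specialise inside <code>add</code>:</p>
<pre><code class="language-ocaml">(* Once the functor body is inlined, the calls to Ord.compare inside
   IntSet.add can become direct calls to the comparison below. *)
module IntSet = Set.Make(struct
  type t = int
  let compare (a : int) b = compare a b
end)

let s = IntSet.add 2 (IntSet.add 1 IntSet.empty)
</code></pre>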
<h2>Known Bugs</h2>
<h3>Recursive Values</h3>
<p>A problem may arise in a rare case of recursive values where a field access can be considered to be a constant. Something that would look like (if it were allowed):</p>
<pre><code class="language-ocaml">type 'a v = { v : 'a }
let rec a = { v = b }
and b = (a.v, a.v)
</code></pre>
<p>I have a few solutions, but I am not sure yet which one is best. This probably won't appear in any normal test. This bug manifests itself through a segmentation fault (<code>cmmgen</code> fails to compile that recursive value reasonably).</p>
<h3>Pattern-Matching</h3>
<p>The new passes assume that every identifier is declared only once in a given module, but this assumption can be broken in some rare pattern-matching cases. I will have to dig through <code>matching.ml</code> to add a substitution in these cases (the only non-hand-built occurrence that I found is in <code>ocamlnet</code>).</p>
<h2>Known Mis-compilations</h2>
<ul>
<li>since there is no cross-module information at the moment, calls to functions from other modules are always slow.
</li>
<li>In some rare cases, there could be functions with more values in their closure, thus resulting in more allocations.
</li>
</ul>
<h2>What's next?</h2>
<p>I would now like to add back cross-module information, and after a bit of cleanup the first series of patches should be ready to propose upstream.</p>
News from May and June https://ocamlpro.com/blog/2013_07_01_news_from_may_and_june2013-07-01T08:12:13Z2013-07-01T08:12:13Z
Çagdas Bozman
It is time to give a brief summary of our recent activities. As usual, our contributions were focused on three main objectives: make the OCaml compiler faster and easier to use;
make the OCaml developers more efficient by releasing new development tools and improving editor supports;
organize and pa...<p>It is time to give a brief summary of our recent activities. As usual, our contributions were focused on three main objectives:</p>
<ul>
<li>make the OCaml compiler faster and easier to use;
</li>
<li>make the OCaml developers more efficient by releasing new development tools and improving editor support;
</li>
<li>organize and participate in community events around the language.
</li>
</ul>
<p>We are also welcoming four interns who will work with us on these objectives during the summer.</p>
<h2>Compiler updates</h2>
<p>Following the ideas he announced in his recent <a href="http://ocamlpro.com/2013/05/24/optimisations-you-shouldnt-do/">blog post</a>, <a href="https://github.com/chambart">Pierre Chambart</a> has made some progress on his <a href="https://github.com/chambart/ocaml/tree/flambda_experiments">inlining branch</a>. He is currently working on stabilizing and cleaning-up the code for optimization which does not take into account inter-module information.</p>
<p>We also continue to work on our profiling tool and have started to separate the different parts of the project. We have <a href="https://github.com/cago/ocaml">patched</a> the compiler and runtime, for both bytecode and native code, to generate <code>.prof</code> files, which contain the identifier-to-location information and allow us to recover the allocation location from the identifier stored in a block's header, and to dump a program's heap to a file on demand or to monitor a running program without memory and performance overhead. <a href="http://cagdas.bozman.fr/">Çagdas Bozman</a> has presented the work he has done so far on his PhD to members of the <a href="http://bware.lri.fr/index.php/Presentation">Bware</a> project, and we have started to test our prototype on industrial use-cases using the <a href="http://why3.lri.fr/">Why3</a> platform.</p>
<p>On the multi-core front, <a href="http://ageinghacker.net/">Luca Saiu</a> is continuing his post-doc with <a href="http://fabrice.lefessant.net/">Fabrice le Fessant</a> and is modifying the OCaml runtime to support parallel programming on multi-core computers. Their version of the “multi-runtime” OCaml provides a message-passing abstraction in which a running OCaml program is “split” into independent OCaml programs, one per thread (if possible running on its separate core) with a separate instance of the runtime library in order to reduce resource contention both at the software and at the hardware level. Luca is now debugging the support for OCaml multi-threading running on top of a multi-context parallel program. A recent presentation covering this work and its challenges is available <a href="http://ageinghacker.net/talks/ocaml-multiruntime-presentation.pdf">online</a>.</p>
<p>A new intern from <a href="http://www.ens-cachan.fr/">ENS Cachan</a>, <a href="https://github.com/thomasblanc">Thomas Blanc</a>, is working on a whole-program analysis system. His internship’s final goal is to provide a good hint of which exceptions may be left uncaught by the program, resulting in a failure. It is quite interesting, as exceptions are pretty much the part of a program that is “hard to foresee”. The main difficulty comes from higher-order functions (like <code>List.iter</code>): because of them, a simple local analysis becomes impossible. So the first task is to take the whole program in the form of separate <code>.cmt</code> files, <a href="https://github.com/thomasblanc/ocaml-typedtree-mapper">merge</a> them, and remove every higher-order function (either by direct inlining when possible or by a very big pattern matching). The merging has already been done through a deep browsing of the compiler’s typedtrees. Thomas is now focusing on reordering the code so that higher-order functions can be safely removed.</p>
<p>Finally, we are helping to prepare release 4.01.0 of the OCaml compiler: Fabrice has integrated his <a href="http://ocamlpro.com/2012/08/08/profiling-ocaml-amd64-code-under-linux/">frame-pointer</a> patch, which can be used to profile the performance of OCaml applications with the Linux <code>perf</code> tool; he has added to <code>Pervasives</code> <a href="https://github.com/ocaml/ocaml/commit/ace0205b6499ffdae4588cfdd640c45855217a8f">two application operators</a> that had been optimized before but were only available to people who knew about them; and he has added a new environment variable, <code>OCAMLCOMPPARAM</code>, that can be used to change how a program is compiled by <code>ocamlc</code>/<code>ocamlopt</code> without changing the build system (for example, <code>OCAMLCOMPPARAM='g=1' make</code> can be used to compile a project in debug mode without modifying the makefiles).</p>
<h2>Development Tools</h2>
<p>Since the initial release of <a href="http://opam.ocamlpro.com">OPAM</a> in March, we have been kept busy preparing the upcoming <code>1.1.0</code> version, which should interface nicely with the forthcoming set of automatic tools which will constitute the first version of the <a href="http://www.cl.cam.ac.uk/projects/ocamllabs/tasks/platform.html">OCaml Platform</a> that we are helping <a href="http://www.cl.cam.ac.uk/projects/ocamllabs/">OCamlLabs</a> to deliver. We have constantly been focused on fixing bugs and implementing feature requests (more than <a href="https://github.com/OCamlPro/opam/issues?direction=desc&milestone=17&page=1&sort=created&state=closed">70 issues</a> have been closed on Github) and we have recently improved the speed and reliability of <code>opam update</code>. More good news related to OPAM: the number of packages submitted to the <a href="http://www.cl.cam.ac.uk/projects/ocamllabs/tasks/platform.html">official</a> repository is steadily increasing, with around 20 new packages integrated every month (and many more upgrades of existing packages), and the official Debian package should land in <a href="http://ftp-master.debian.org/new/opam_1.0.0-1.html">testing</a> very soon.</p>
<p>This month, <a href="http://louis.gesbert.fr/cv.en.html">Louis</a> was still busy improving different tools for OCaml code editing. <code>ocp-index</code> and <code>ocp-indent</code>, made for the community to improve the general OCaml experience and kindly funded by <a href="http://janestreet.com">Jane Street</a>, have seen some updates:</p>
<ul>
<li><a href="https://github.com/OCamlPro/ocp-index">ocp-index</a>: the library data access tool which was first presented in <a href="http://ocamlpro.com/2013/04/22/april-monthly-report/">April</a>, has seen some progress, with the ability to locate definitions and resolve type names. It is not yet considered stable though; expect more from it soon. An early release (0.2.0) is in OPAM.
</li>
<li><a href="https://github.com/OCamlPro/ocp-indent">ocp-indent</a>, the generic OCaml source code indenter, has seen its usual batch of fixes, along with some new customization options. Also, its <a href="https://github.com/OCamlPro/ocp-indent/blob/master/src/indentPrinter.mli">library interface</a> has been rewritten, offering much better flexibility and opening the gate to uses like restarting from checkpoints to avoid full reparsing, detecting top-expression boundaries, syntax colouring, etc. We will be releasing 1.3.0 in OPAM very soon.
</li>
</ul>
<p>We are also developing in-house projects aiming at providing a better first experience of OCaml to beginners and students:</p>
<ul>
<li>the new <a href="https://github.com/OCamlPro/ocaml-top">ocaml-top</a> (previously named <code>ocp-edit-simple</code>) aims to offer a simple, but clean and easy-to-use interface to interact with the OCaml top-level. It is intended mainly for exercises, tutorials and practicals. A release should be coming soon, the Linux version being quite stable while some bugs remain on Windows.
</li>
<li>two new interns, <a href="http://www.linkedin.com/profile/view?id=238971426&locale=fr_FR&trk=tyah">David</a> and <a href="http://www.linkedin.com/profile/view?id=65173689">Pierrick</a>, have started working on a <a href="https://github.com/pcouderc/ocp-webedit">web-IDE</a> for OCaml. As students, they have sometimes seen how difficult it can be to install OCaml on some OSes, or simply to configure editors like emacs or vim. To solve these issues, the idea is to use only a web browser-based editor and provide a way to compile a project without having to install anything on your computer. For the editing part, the idea is to use <a href="http://ace.ajax.org/">Ace</a> and improve it for OCaml, using <a href="https://github.com/OCamlPro/ocp-indent">ocp-indent</a> for example, which is possible by using <a href="http://ocsigen.org/js_of_ocaml/">js_of_ocaml</a>. The next step will be to glue this editor with both <a href="http://try.ocamlpro.com/">TryOCaml</a> to execute code, and a cloud computing part, to store projects and files and access them from anywhere.
</li>
</ul>
<p>We are also trying to improve cross-compilation tutorials and tools for developing native iOS applications under a Linux system, using the OCaml language. <a href="http://fr.linkedin.com/pub/souhire-kenawi/6a/614/54b/">Souhire</a>, our fourth new intern, is experimenting with that idea and will document how to set up such an environment, from the ground up to publication on the application store (if that is possible). She is starting to look at how iOS applications (with a native graphical interface) written in C can be cross-compiled on <a href="http://code.google.com/p/ios-toolchain-based-on-clang-for-linux/wiki/HowTo_en">Linux</a>, and how the ones written in OCaml can be cross-compiled on <a href="http://psellos.com/ocaml/">MacOSX</a>.</p>
<p>On the library front, Fabrice has completely rewritten the way his <a href="http://www.typerex.org/ocplib-wxOCaml.html">wxOCaml library</a> is generated, compared to what was described in a previous <a href="http://ocamlpro.com/2013/04/02/wxocaml-camlidl-and-class-modules/">blog post</a>. It does not share any code anymore with other wxWidgets bindings (wxHaskell or wxEiffel), but directly generates the stubs from a DSL (close to C++) describing the wxWidgets classes. It should make binding more widgets (classes) and more methods for each widget much easier, and also help with maintenance, evolution and compatibility across wxWidgets versions. There is now an interesting set of samples in the library, covering many usages.</p>
<h2>Community</h2>
<p>We have also been pretty active during the last months in promoting the use of OCaml in the free-software and research community: we are actively participating in the upcoming <a href="http://ocaml.org/meetings/ocaml/2013/">OCaml 2013</a> and <a href="http://cufp.org/2013cfp">Commercial Users of Functional Programming</a> conferences, which will be held next September in Boston.</p>
<p>While I was visiting <a href="http://janestreet.com/">Jane Street</a> with <a href="http://www.cl.cam.ac.uk/projects/ocamllabs/index.html">OCamlLabs’ team</a>, I had the pleasure of being invited to give a talk at the <a href="http://www.meetup.com/NYC-OCaml/">NYC OCaml meetup</a> on OPAM (my slides can be found online <a href="http://ocamlpro.com/pub/ny-meetup.pdf">here</a>). It was a nice meetup, with more than 20 people, hosted in the great Jane Street New York offices.</p>
<p>OCamlPro is still organizing OCaml meetups in Paris, hosted by <a href="http://www.irill.org/">IRILL</a> and sponsored by <a href="http://www.lexifi.com/">LexiFi</a>: our last OCaml Users in PariS (OUPS) meetup was in <a href="http://www.meetup.com/ocaml-paris/events/116100692/">May</a>, with more than 50 people attending! It was a nice collection of talks, where Esther Baruk spoke about the usage of OCaml at LexiFi, Benoit Vaugon about all the secrets that we always wanted to know about the OCaml bytecode, Frédéric Bour presented Merlin, the new IDE for Vim, and Gabriel Scherer told us how to better interact with the OCaml core team.</p>
<p>We are now preparing our next <a href="http://www.meetup.com/ocaml-paris/events/121412532/">OUPS</a> meeting, which will take place at IRILL on Tuesday, July 2nd. Emphasis will be on programming in OCaml in different contexts: there will be talks on js_of_ocaml experiences, GPGPU in OCaml and GADTs in practice. There are still many seats available, so do not hesitate to register for the meetup; and if you cannot make it, videos of the talks (in French) will be available afterwards this time.</p>
<p>Not really related to OCaml, we also attended the <a href="http://www.teratec.eu/gb/forum/index.html">Teratec 2013 Forum</a>, which brings together a lot of <a href="http://www.scilab.org/">Scilab</a> users. This is part of the <a href="http://www.richelieu.pro">Richelieu</a> research project that <a href="http://www.linkedin.com/profile/view?id=130990583">Michael</a> is working on: his goal is to analyze Scilab code before just-in-time compilation. It requires a basic type-inference algorithm, but for a language that has not been designed for that! He is currently struggling with the dynamic aspects of the Scilab language. After some work on preprocessing the <code>eval</code> and <code>evalstr</code> functions, he is now focusing on how Scilab programmers usually write functions. He is currently using different kinds of analyses on real-world Scilab programs to understand how they are structured.</p>
<p>Finally, we are happy to announce that we finally found the time to release the <a href="https://github.com/OCamlPro/ocaml-cheat-sheets">sources</a> of our OCaml <a href="http://www.typerex.org/cheatsheets.html">cheat-sheets</a>. Feel free to contribute by sending patches if you are interested in improving them!</p>
Optimisations you shouldn’t dohttps://ocamlpro.com/blog/2013_05_24_optimisations_you_shouldnt_do2013-05-24T08:12:13Z2013-05-24T08:12:13Z
chambart
Doing the compiler's work Working at OCamlPro may have some drawbacks. I spend a lot of time hacking the OCaml compiler. Hence when I write some code, I have a good glimpse of what the generated assembly will look like. This is nice when I want to write performance sensitive code, but as I usually w...<h2>Doing the compiler's work</h2>
<p>Working at OCamlPro may have some drawbacks. I spend a lot of time hacking the OCaml compiler. Hence when I write some code, I have a good glimpse of what the generated assembly will look like. This is nice when I want to write performance-sensitive code, but as I usually write code for which execution time doesn't matter much, this mainly tends to torture me. A small voice in my head is telling me "you shouldn't write it like that, you know you could avoid this allocation". And usually, following that indication would only tend to make the code less readable. But there is a solution to calm that voice: making the compiler smarter than me.</p>
<p>OCaml compilation mechanisms are quite predictable. There is no dark magic to replace your ugly code by a well-behaving one, but it always generates reasonably efficient code. This is a good thing in general, as you won't be surprised by code running more slowly than what you usually expect. But it does not behave very well with dumb code. This may not often seem like a problem with code written by humans, but generated code, for example coming from camlp4/ppx, or compiled from another language to OCaml, may fall into that category. In fact, there is another common source for non-human written code: inlining.</p>
<h2>Inlining</h2>
<p>Inlining (or inline expansion) is a key optimisation in many compilers and particularly in functional languages. Inlining replaces a function call by the function body. Let's apply inlining to f in this example.</p>
<pre><code class="language-OCaml">let f x = x + 1
let g y = f (f y)
</code></pre>
<p>We replace the calls to f with a let for each argument and then copy the body of f.</p>
<pre><code class="language-OCaml">let g y =
  let x1 = y in
  let r1 = x1 + 1 in
  let x2 = r1 in
  let r2 = x2 + 1 in
  r2
</code></pre>
<p>Inlining eliminates the cost of a call (and the associated spilling), but the main point is elsewhere: it puts the code in its context, allowing its specialisation. When you look at that generated code after inlining, your trained eyes will notice that it looks quite dumb, and you really want to rewrite it as:</p>
<pre><code class="language-OCaml">let g y = y + 2
</code></pre>
<p>The problem is that OCaml is compiling smart code into smart assembly, but after inlining your code is not as smart as it used to be. What is missing in the current compiler is a way to get back nice and smart code after inlining. (To be honest, OCaml is not that bad and on that example it would generate the right code: put this down to the mandatory blog-post dramatic effect.)</p>
<p>In fact you could consider inlining as two separate things: duplication and call elimination. By duplication you make a new version of the function that is specialisable in its context, and by call elimination you replace the call by specialised code. This distinction is important because there are some cases where you only want to do duplication: recursive functions.</p>
<h3>Recursive function inlining</h3>
<p>In a recursive function, duplicating and removing a call is similar to loop unrolling. This can be effective in some cases, but this is not what we want to do in general. Let's try it on <code>List.map</code>.</p>
<pre><code class="language-OCaml">let rec list_map f l = match l with
  | [] -&gt; []
  | a::r -&gt; f a :: list_map f r

let l' =
  let succ = (fun x -&gt; x + 1) in
  list_map succ l
</code></pre>
<p>If we simply inline the body of list_map, we obtain this:</p>
<pre><code class="language-OCaml">let l' =
  let succ = (fun x -&gt; x + 1) in
  match l with
  | [] -&gt; []
  | a::r -&gt; succ a :: list_map succ r
</code></pre>
<p>And with some more inlining we get this, which is probably not any faster than the original code:</p>
<pre><code class="language-OCaml">let l' =
  let succ = (fun x -&gt; x + 1) in
  match l with
  | [] -&gt; []
  | a::r -&gt; a + 1 :: list_map succ r
</code></pre>
<p>Instead we want the function to be duplicated.</p>
<pre><code class="language-OCaml">let l' =
  let succ = (fun x -&gt; x + 1) in
  let rec list_map' f l = match l with
    | [] -&gt; []
    | a::r -&gt; f a :: list_map' f r in
  list_map' succ l
</code></pre>
<p>Now we know that <code>list_map'</code> will never escape its context, so its f parameter will always be succ. Hence we can replace <code>f</code> by succ everywhere in its body.</p>
<pre><code class="language-OCaml">let l' =
  let succ = (fun x -&gt; x + 1) in
  let rec list_map' f l = match l with
    | [] -&gt; []
    | a::r -&gt; succ a :: list_map' succ r in
  list_map' succ l
</code></pre>
<p>And we can now see that the <code>f</code> parameter is not used anymore, so we can eliminate it.</p>
<pre><code class="language-OCaml">let l' =
  let succ = (fun x -&gt; x + 1) in
  let rec list_map' l = match l with
    | [] -&gt; []
    | a::r -&gt; succ a :: list_map' r in
  list_map' l
</code></pre>
<p>With some more inlining and cleaning, we finally obtain this nicely specialised function, which will be faster than the original:</p>
<pre><code class="language-OCaml">let l' =
  let rec list_map' l = match l with
    | [] -&gt; []
    | a::r -&gt; a + 1 :: list_map' r in
  list_map' l
</code></pre>
<h3>Current state of the OCaml inliner</h3>
<p>Inlining can gain a lot, but abusing it will also increase code size considerably. This is not only a problem of binary size (who cares?): if your code does not fit in the processor cache anymore, its speed will be limited by memory bandwidth.</p>
<p>To avoid that, OCaml has a threshold on the function size allowed for inlining. The compiler may also refuse to inline in other cases that are not completely justified, mainly for reasons related to its architecture:</p>
<ul>
<li>duplication and call elimination are not separated, hence recursive function duplication is not possible.
</li>
<li>functions containing structured constants or local functions are not allowed to be duplicated, preventing those functions from being inlined.
</li>
</ul>
<pre><code class="language-OCaml">let constant x =
  let l = [1] in
  x::l

let local_function x =
  let g x = some closed function in
  ... g x ...
</code></pre>
<p>The assumption is that if a function contains a constant or a function it will be too big to be reasonably inlined. But there is a reasonable transformation that could allow it.</p>
<pre><code class="language-OCaml">let l = [1]
let constant x =
  x::l

let g x = some closed function
let local_function x = ... g x ...
</code></pre>
<p>and then we can reasonably inline <code>constant</code> and <code>local_function</code>. Those cases are only technical limitations that could easily be lifted with the new implementation.</p>
<p>But improving the OCaml inliner is not that easy. It is well written, but it is also doing a lot of other things at the same time:</p>
<h4>Closure conversion</h4>
<p>Closure conversion transforms functions into a data structure containing a code pointer and the free variables of the function. You could imagine it as this transformation:</p>
<pre><code class="language-OCaml">let a = 1
let f x = x + a (* a is a free variable in f *)
let r = f 42
</code></pre>
<p>Here <code>a</code> is a free variable of <code>f</code>. We cannot compile <code>f</code> while it contains reference to free variables. To get rid of the free variables, we add a new parameter to the function, the environment, containing all the free variables.</p>
<pre><code class="language-OCaml">let a = 1
let f x environment =
  (* the new environment parameter contains all the free variables of f *)
  x + environment.a
let f_closure = { code = f; environment = { a = a } }
let r = f_closure.code 42 f_closure.environment
</code></pre>
<h4>Value analysis</h4>
<p>In functional languages, inlining is not as simple as it is for languages like C, because the function name does not tell you which function is used at a call site:</p>
<pre><code class="language-OCaml">let f x = (x,(fun y -&gt; y+1))
let g x =
  let (a,h) = f x in
  h a
</code></pre>
<p>To be able to inline h as (fun y -> y+1), the compiler needs to track which values flow to variables. This is usually called a value analysis. This example can look a bit contrived, but in practice functor applications generate quite similar code. This allows, for instance, to inline Set.Make(...).is_empty. The result of this value analysis is used by other optimisations:</p>
<h4>Constant folding</h4>
<p>When the value analysis can determine that the result of an operation is a constant, it can remove its computation:</p>
<pre><code class="language-OCaml">let f x =
  let a = 1 in
  let b = a + 1 in
  x + b
</code></pre>
<p>Since <code>b</code> always has the value <code>2</code> and <code>a + 1</code> does not have side effects, it is possible to simplify it.</p>
<pre><code class="language-OCaml">let f x =
  x + 2
</code></pre>
<h4>Direct call specialisation</h4>
<p>Sometimes it is impossible to know which function will be used at a call site:</p>
<pre><code class="language-OCaml">let f g x = g x
</code></pre>
<p>There is a common representation (the closure) that makes it possible to call a function without knowing anything about it. Using a function through its closure is called a generic call. This is efficient, but of course not as efficient as a simple assembly call (a direct call). The work of direct call specialisation is to turn as many generic calls as possible into direct ones. In practice, the vast majority of calls can be optimised.</p>
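<p>Here is a small illustrative sketch (the names are made up, and this is not compiler output): when the value analysis can see that <code>g</code> is always <code>succ</code>, the generic call inside <code>apply</code> can be replaced by a direct call to <code>succ</code>:</p>
<pre><code class="language-OCaml">(* apply and succ are illustrative names. *)
let apply g x = g x      (* g x is a generic call through a closure *)
let succ y = y + 1
let r = apply succ 41    (* after specialisation: a direct call to succ *)
</code></pre>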
<h2>Improving OCaml inliner</h2>
<p>The current architecture is very fast and works well on a lot of cases, but it is quite difficult to improve the handling of corner cases.</p>
<p>I have started a complete rewrite of those passes, and I am currently working on splitting all those things into their own passes. The first step was to add a new intermediate representation (flambda) more suited to doing the various analyses. The main difference with the current representation (clambda) is that closures are explicitly represented, making them easier to manipulate. As a nice side effect, this intermediate representation makes it possible to plug passes in or out, or to loop on them, without changing anything in the architecture. But we lose the possibility of enforcing some invariants in the type of the representation, so we need to be careful to maintain them correctly.</p>
<p>With this new architecture, the closure conversion is done first (going from lambda to flambda). Then, on flambda, a set of simple analyses is provided:</p>
<ul>
<li>simple intraprocedural value and alias analysis
</li>
<li>purity analysis
</li>
<li>constant analysis
</li>
<li>dead expression analysis
</li>
</ul>
<p>And there is a set of simple passes using their results:</p>
<ul>
<li>dead code elimination
</li>
<li>constant folding/direct call specialisation/type specialisation: a simple traversal replacing expressions with more efficient ones when the result of the value analysis allows it.
</li>
<li>alias rebinding: Use results of alias analysis to know when a field access can be simplified:
</li>
</ul>
<pre><code class="language-OCaml">let f x =
  let tuple = (x,x) in
  let (y,z) = tuple in
  y + z

let f x =
  x + x
</code></pre>
<p>Of course nobody would write that, but access to variables bound in a closure can look a lot like that after inlining:</p>
<pre><code class="language-OCaml">let f x =
  let g y = x + y in
  g x
</code></pre>
<p>After closure conversion we obtain this.</p>
<pre><code class="language-OCaml">let f x =
  let g_closure =
    { code = (fun y environment -&gt; environment.x + y);
      environment = { x = x } } in
  g_closure.code x g_closure.environment
</code></pre>
<p>And after inlining <code>g</code>.</p>
<pre><code class="language-OCaml">let f x =
  let g_closure =
    { code = (fun y environment -&gt; environment.x + y);
      environment = { x = x } } in
  x + g_closure.environment.x
</code></pre>
<p>Inlining <code>g</code> produces code that looks a bit stupid: <code>g_closure.environment.x</code> is always the same value as <code>x</code>, so there is no need to access it through the structure.</p>
<pre><code class="language-OCaml">let f x =
  let g_closure =
    { code = (fun y environment -&gt; environment.x + y);
      environment = { x = x } } in
  x + x
</code></pre>
<p>Now that we have simplified the code, we notice that g_closure is not used anymore, and dead code elimination can simply get rid of it.</p>
<pre><code class="language-OCaml">let f x =
  x + x
</code></pre>
<ul>
<li>a really, really dumb inliner: it inlines almost anything. Its interest is to demonstrate what can be achieved when putting some code in its context.
</li>
</ul>
<p>After the different optimisation passes, we need to send the result to the compiler back-end. This is done by the final conversion from flambda to clambda, which mainly does a lot of bureaucratic transformations and marks constant structured values. Doing this constant marking separately also improves the generated code a bit.</p>
<pre><code class="language-OCaml">let rec f x =
  let g y = f y in
  g x
</code></pre>
<p><code>f</code> and <code>g</code> are closed functions, but the current compiler is not able to detect it and allocates a closure for <code>g</code> at each call of <code>f</code>.</p>
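<p>A rough sketch of what detecting this would enable (illustrative only, not what the compiler currently produces): since <code>g</code> captures nothing, it can be defined once at the top level instead of being allocated on every call to <code>f</code>:</p>
<pre><code class="language-OCaml">(* g is hoisted out of f; no closure is allocated per call. *)
let rec f x = g x
and g y = f y
</code></pre>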
<h2>Hey! Where are the nice charts?</h2>
<p>As you noticed, there are no fancy improvement charts, and there won't be any below. Those are demonstration passes; the generated code can (and probably will) be worse than the one generated by the current compiler. This is mainly done to show what can be achieved by combining simple passes with simple analyses and applying them multiple times. What is needed to get fast code is to change the inlining heuristic (and re-enable cross-module inlining).</p>
<p>My current work is to write more serious analyses allowing better optimisations. In particular I expect that a reasonable interprocedural value analysis could help a lot with handling recursive function specialisation.</p>
<h2>My future toys</h2>
<p>Then I'd like to play a bit with other common things like</p>
<ul>
<li>unused parameter elimination: when a function does not use one of its parameters, remove it. This is trivial with simple functions, but it can get a bit tricky with mutually recursive functions (that kind of code can appear after constant folding with information from interprocedural analysis).
</li>
<li>lambda lifting: turning closures into closed functions by adding arguments. This can eliminate some allocations.
</li>
</ul>
<pre><code class="language-OCaml">let f x =
  let g y = x + y in
  g 4
</code></pre>
<p>If we add the <code>x</code> parameter to <code>g</code> we can avoid building its closure each time <code>f</code> is called.</p>
<pre><code class="language-OCaml">let f x =
  let g y x = x + y in
  g 4 x
</code></pre>
<p>This can get quite tricky if we want to handle cases like</p>
<pre><code class="language-OCaml">let f x n =
  let g i = i + x in
  Array.init n g
</code></pre>
<p>We would also need to add a new parameter to <code>Array.init</code> to be able to pass it to <code>g</code>.</p>
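<p>A hedged sketch of what that could look like (the specialised <code>init_with_x</code> copy is hypothetical, introduced only for illustration):</p>
<pre><code class="language-OCaml">let f x n =
  let g i x = i + x in
  (* init_with_x is a hypothetical specialised copy of Array.init that
     takes the extra x argument, so that g can stay a closed function. *)
  let init_with_x n x =
    if n = 0 then [||]
    else begin
      let a = Array.make n (g 0 x) in
      for i = 1 to n - 1 do
        a.(i) &lt;- g i x
      done;
      a
    end
  in
  init_with_x n x
</code></pre>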
<ul>
<li>common sub-expression elimination:
</li>
</ul>
<pre><code class="language-OCaml">let f x =
  let a = x + 1 in
  let b = x + 1 in
  a + b
</code></pre>
<p>In <code>f</code> we clearly don't need to compute <code>x + 1</code> twice:</p>
<pre><code class="language-OCaml">let f x =
  let a = x + 1 in
  a + a
</code></pre>
<ul>
<li>earlier unboxing: floats are boxed in OCaml, which means that there is an indirection when accessing the contents of a value of type float. To reduce the cost of allocating and accessing floats, unboxing eliminates the indirections between some operations. I'd like to try to do this as a flambda pass to be able to use the results of the value analysis (see the small sketch after this list).
</li>
</ul>
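<p>A tiny illustration of the float boxing issue (purely illustrative): in the function below, every intermediate result of <code>*.</code> and <code>+.</code> is a candidate for staying unboxed between operations instead of being allocated on the heap.</p>
<pre><code class="language-OCaml">(* Each intermediate float is conceptually boxed; unboxing keeps the
   values in registers between the multiplications and the addition. *)
let norm2 x y = x *. x +. y *. y
</code></pre>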
<p>If you want to play/hack a bit with the demo, look at my <a href="https://github.com/chambart/ocaml/tree/flambda_experiments">github branch</a> (be warned, this branch may sometimes be rebased).</p>
April Monthly Report https://ocamlpro.com/blog/2013_04_22_april_monthly_report2013-04-22T08:12:13Z2013-04-22T08:12:13Z
Çagdas Bozman
This post aims at summarizing the activities of OCamlPro for the past month. As usual, we worked in three main areas: the OCaml toolchain, development tools for OCaml and R&D projects. The toolchain Our multi-runtime implementation of OCaml had gained stability. Luca fixed a lot of low-level bugs in...<p>This post aims at summarizing the activities of OCamlPro for the past month. As usual, we worked in three main areas: the OCaml toolchain, development tools for OCaml and R&D projects.</p>
<h2>The toolchain</h2>
<p>Our multi-runtime implementation of OCaml has gained stability. <a href="http://ageinghacker.net/">Luca</a> fixed a lot of low-level bugs in the “master” branch of <a href="http://www.github.com/lucasaiu/ocaml">his OCaml repository</a>, which were mainly related to the handling of signals. There are still some issues, which seem to be related to thread-switching (i.e. when using OS-level multi-threading).</p>
<p>We made great progress on the improved inlining strategy. In the current OCaml compiler, inlining, closure conversion and constant propagation are done in a single pass interleaved with analysis. It has served well until now, but to improve it in a way which is easily extensible in the future, it needs a complete rewrite. After a few prototypes, <a href="http://www.lsv.ens-cachan.fr/%7Echambart/">Pierre</a> is now coming up with an intermediate representation (IR) better suited for the job, using a dedicated value analysis to guide the simplification and inlining passes. This IR will stand between the lambda code and the C-lambda code and is designed such that future specialized optimizations can easily be added. There are two good reasons for this IR: first, it is not as intrusive and reduces the extent of the modifications to the compiler, as it can be plugged between two existing passes and turned on or off using a command-line flag. Second, it can be tweaked to make the abstract interpretation more precise and efficient. For instance, we want the inlining to work with higher-order functions as well as modules and functors, performing basic defunctorization. It is still in an experimentation phase, but we are quickly converging on the API and hope to have something we can demo in the coming months.</p>
<p>Our <a href="http://ocamlpro.com/2012/08/08/profiling-ocaml-amd64-code-under-linux/">frame-pointer patch</a> has also been accepted. Note that, as this patch changes the calling convention of OCaml programs, you cannot link together libraries compiled with and without the patch. Hence, this option will be provided as a configuration switch (<code>./configure --with-frame-pointer</code>).</p>
<p>Regarding memory profiling, we released a preliminary prototype of the memory profiler for native code. It is available in <a href="https://github.com/cago/ocaml">Çagdas'</a> repository. We are still in the process of testing and evaluating the prototype before making it widely available through OPAM. As with the previous bytecode prototype, you need to compile the libraries and the program you want to profile as usual in order to build a table associating unique identifiers to program locations (the .prof file). We have patched the OCaml runtime to encode, in the header of each allocated block, the identifier of the line which allocated it. To be able to dump the heap, you can either instrument your program, send a signal, or set the appropriate environment variable (<code>OCAMLRUNPARAM=m</code>). Finally, you can use the profiler, which will read the .prof and .cmt files in order to generate a pdf file showing the amount of memory used, by type. More details on this will come soon; you can already read the <a href="https://github.com/cago/ocaml/blob/4.00.1%2Bmemprof/README">README</a> file available on github.</p>
<p>Finally, we organized a new meeting with the core-team to discuss some of the bugs in the <a href="http://caml.inria.fr/mantis">OCaml bug tracker</a>. It was the first of the year, but we are now going to have one every month, as it has a very positive impact on the involvement of everybody in fixing bugs and helps focus work on the most important issues.</p>
<h2>Development Tools for OCaml</h2>
<p>Since the latest release of <a href="http://github.com/OCamlPro/ocp-indent">ocp-indent</a>, <a href="http://louis.gesbert.fr/cv.en.html">Louis</a> continued to improve the tool. We plan to release version 1.2.0 in the next couple of days, with some bug fixes (esp. related to the handling of records) and the following new features: operators are now aligned by default (as well as opened parentheses not finishing a line) and indentation can be bounded using the <code>max_indent</code> parameter. We are also using the great <a href="http://erratique.ch/software/cmdliner">cmdliner</a> which means <code>ocp-indent</code> now has nice manual pages.</p>
<p>We are also preparing a new minor release of <a href="http://opam.ocamlpro.com/">OPAM</a>, with a few bug fixes, an improved solver heuristic and improved performance. OPAM statistics seem to converge towards round numbers, as the <a href="http://github.com/OCamlPro/opam">OCamlPro/opam</a> repository has recently reached 100 “stars” on Github, <a href="http://github.com/OCamlPro/opam-repository">OCamlPro/opam-repository</a> is not very far from being forked 100 times, while the number of unique packages on <a href="http://opam.ocamlpro.com">opam.ocamlpro.com</a> is almost 400. We are also preparing the platform release, with a cleaner and simpler client API to be used by the upcoming “Ocamlot”, the automated infrastructure which will test and improve the quality and consistency of OPAM packages.</p>
<p>Last, we released a very small – but already incredibly useful – tool: <a href="http://github.com/OCamlPro/ocp-index">ocp-index</a>. This new tool provides completion based on what is installed on your system, with type and documentation when available. Similarly to <code>ocp-indent</code>, the main goal of this tool is to make it easy to integrate into your editor of choice. As a proof of concept, we also distribute a small curses-based tool, called <code>ocp-browser</code>, which lets you interactively browse the libraries installed on your system, as well as an emacs binding for <code>auto-complete.el</code>. Interestingly enough, behind the scenes <code>ocp-index</code> uses a <a href="https://github.com/OCamlPro/ocp-index/blob/master/src/trie.mli">lazy immutable prefix tree</a> with merge operations to efficiently store and load cmi and cmt files.</p>
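<p>For the curious, here is a minimal sketch of what a lazy, immutable prefix tree can look like (the type and functions below are illustrative only and are not the actual <code>ocp-index</code> API):</p>
<pre><code class="language-ocaml">(* A node optionally carries a value and exposes its children lazily, so
   large parts of the tree are only built when first traversed. *)
type 'a trie = {
  value : 'a option;
  children : (char * 'a trie Lazy.t) list;
}

let empty = { value = None; children = [] }

(* Look a key up, forcing only the nodes along its path. *)
let rec find key t =
  match key with
  | [] -&gt; t.value
  | c :: rest -&gt;
    (try find rest (Lazy.force (List.assoc c t.children))
     with Not_found -&gt; None)
</code></pre>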
<h2>Other R&D Projects</h2>
<p>We continued to work on the <a href="http://www.richelieu.pro/">Richelieu</a> project. We are currently adding basic type-inference for Scilab programs to our tool <a href="https://github.com/OCamlPro/richelieu/tree/jit-fabrice/scilab/modules/jit_ocaml/src/scilint">scilint</a>, to be able to print warnings about possible programmer mistakes. A first part of the work was to understand how to automatically get rid of some of the <code>eval</code> constructs, especially the <code>deff</code> and <code>evalstr</code> primitives that are often used. After this, <a href="https://github.com/Michaaell">Michael</a> manually analyzed some real-world Scilab programs to understand how typing should be done, and he is now implementing the type checker and a table of types for primitive functions.</p>
<p>We are also submitting a new project, called SOCaml, for funding by the French government. In 2010, <a href="http://www.ssi.gouv.fr/">ANSSI</a>, the French agency for the security of computer systems, commissioned a study, called LAFOSEC, to understand the advantages of using functional languages in the domain of security. Early results of the study were presented at <a href="http://jfla.inria.fr/2013/programme">JFLA’2013</a>, with, in particular, recommendations on how to improve OCaml for use in security applications. The goal of the SOCaml project would be to implement these recommendations, to improve OCaml, to provide additional tools to detect potential errors and to implement libraries to verify marshaled values and bytecode. We hope the project will be accepted, as it opens a new application domain for OCaml, and would allow us to work on this topic with our partners in the project, such as <a href="http://www.lexifi.com">LexiFi</a> and <a href="http://michel.mauny.net/">Michel Mauny</a>‘s team at ENSTA Paristech (the project would also contribute to their <a href="http://github.com/ocaml-bytes/ocamlcc">ocamlcc</a> bytecode-to-c compiler).</p>
wxOCaml, camlidl and Class Modules https://ocamlpro.com/blog/2013_04_02_wxocaml_camlidl_and_class_modules2013-04-02T08:12:13Z2013-04-02T08:12:13Z
Çagdas Bozman
Last week, I was bored doing some paperwork, so I decided to hack a little to relieve my mind... Looking for a GUI Framework for OCaml Beginners Some time ago, at OCamlPro, we had discussed the fact that OCaml was lacking more GUI frameworks. Lablgtk is powerful, but I don’t like it (and I expect ...<p>Last week, I was bored doing some paperwork, so I decided to hack a little to relieve my mind...</p>
<h2>Looking for a GUI Framework for OCaml Beginners</h2>
<p>Some time ago, at OCamlPro, we had discussed the fact that OCaml was lacking more GUI frameworks. Lablgtk is powerful, but I don’t like it (and I expect that some other people in the OCaml community share my opinion) for several reasons:</p>
<ul>
<li>LablGTK makes extensive use of objects, labels and polymorphic variants. Although using these advanced features of OCaml can help expert OCaml developers, it makes LablGTK hard to use for beginners… and a good reason to have better GUIs is actually to attract beginners!
</li>
<li>GTK does not look native under Windows and Mac OS X, giving an outdated feeling about interfaces written with it.
</li>
</ul>
<p>Now, the question was: which GUI framework to support for OCaml? A long time ago, I had heard that <a href="http://www.wxwidgets.org/">wxWidgets</a> (formerly wxWindows) had contributed to the popularity of Python at some point, and I remembered that there was a binding called <a href="http://plus.kaist.ac.kr/%7Eshoh/ocaml/wxcaml/doc/">wxCaml</a> that had been started by SooHyoung Oh a few years ago. I had managed to compile it two years ago, but not to make the examples work, so I decided it was worth another try.</p>
<h2>From wxEiffel to wxCaml, through wxHaskell</h2>
<p>wxCaml is based on <a href="http://www.haskell.org/haskellwiki/WxHaskell">wxHaskell</a>, itself based on <a href="http://elj.sourceforge.net/projects/gui/ewxw/">wxEiffel</a>, a binding for wxWidgets done for the Eiffel programming language. Since wxWidgets is written in C++, and most high-level programming languages only support bindings to C functions, the wxEiffel developers wrote a first binding from C++ to C, called the <a href="https://github.com/OCamlPro/wxOCaml/tree/master/elj">ELJ library</a>: for each class wxCLASS of wxWidgets, and for each method Method of that class, they wrote a function wxCLASS_Method that takes the object as its first argument, followed by the other arguments of the method, and then calls the method on the first argument with the other arguments. For example, the code for the <a href="https://github.com/OCamlPro/wxOCaml/blob/master/elj/eljwindow.cpp">wxWindow</a> class looks a lot like this:</p>
<pre><code class="language-cpp">EWXWEXPORT(bool,wxWindow_Close)(wxWindow* self,bool _force)
{
  return self->Close(_force);
}
</code></pre>
<p>From what I understood, they stopped maintaining this library, so the wxHaskell developers took the code and maintained it for wxHaskell. In wxHaskell, a few include files describe all these C functions. Then, they use a program ‘wxc’ that generates Haskell stubs for all these functions, in a class hierarchy.</p>
<p>In the first version of wxCaml, <a href="http://forge.ocamlcore.org/projects/camlidl/">camlidl</a> was used to generate OCaml stubs from these header files. The header files had to be modified a little, for several reasons:</p>
<ul>
<li>They are actually not correct: some parts of these header files have not been updated to match the evolution of the wxWidgets API. Some of the classes for which they describe stubs do not exist anymore. The tool used by wxHaskell filters out these classes, because their names are hardcoded in its code, but camlidl cannot.
</li>
<li>camlidl needs to know more information than just what is written in C header files. It needs some attributes on types and arguments, like the fact that a char pointer is actually a string, or that a pointer argument to a function is used to return a value. See <a href="https://github.com/OCamlPro/wxOCaml/blob/master/idl/wxc_types.idl">wxc_types.idl</a> for macros to automate parts of this step.
</li>
<li>camlidl was not used a lot, and not maintained for a long time, so there are some bugs in it. For example, the names of the arguments given in IDL header files can conflict with variables generated in C by camlidl (such as “_res”) or with types of the caml C API (such as “value”).
</li>
</ul>
<p>Since the version of wxCaml I downloaded used outdated versions of wxWidgets (wxWindows 2.4.2 when the current version is wxWidgets 2.9) and wxHaskell (0.7 when the current version is 0.11), I decided to upgrade wxCaml to the current versions. I copied the ELJ library and the header files from the GitHub repository of wxHaskell. Unfortunately, the corresponding wxWidgets version is 2.9.4, which is not yet officially distributed by mainstream Linux distributions, so I had to compile it as well.</p>
<p>After the painful work of fixing the new header files for camlidl, I was able to run a few examples of wxCaml. But I was not completely satisfied with it:</p>
<ul>
<li>To translate the relation of inheritance between classes for camlidl, wxCaml makes them equivalent, so that the child can be used where the ancestor can be used. Unfortunately, it also means that the ancestor can be used wherever the child would, and since most classes are descendants of wxObject, they can all be used in place of each other in the OCaml code!
</li>
<li>A typed version of the interface had been started, but it was already making heavy use of objects, which I had decided to ban from the new version, along with other advanced features of OCaml.
</li>
</ul>
<h2>wxCamlidl, modifying camlidl for wxOCaml</h2>
<p>So, I decided to write a new typed interface, where each class would be translated into an abstract type, a module containing its methods as functions, and a few cast functions, from children to ancestors.</p>
<p>I wrote just what was needed to make two simple examples work (<a href="https://github.com/OCamlPro/wxOCaml/blob/master/examples/hello_world/hello.ml">hello_world</a> and <a href="https://github.com/OCamlPro/wxOCaml/blob/master/examples/two_panels/two_panels.ml">two_panels</a>, from wxWidgets tutorials), and I was happy with the result:</p>
<p><a href="https://github.com/OCamlPro/wxOCaml/blob/master/examples/hello_world/hello.ml"><img src="http://ocamlpro.com//files/wxOCaml-screenshot-hello.png" alt="wxOCaml-screenshot-hello.png" /></a></p>
<p><a href="https://github.com/OCamlPro/wxOCaml/blob/master/examples/two_panels/two_panels.ml"><img src="http://ocamlpro.com//files/wxOCaml-screenshot-panels.png" alt="wxOCaml-screenshot-panels.png" /></a></p>
<p>But writing the complete interface for all classes and methods by hand would not be possible, so I decided it was time to write a tool for that task.</p>
<p>My first attempt at automating the generation of the typed interface failed because the basic tool I wrote didn’t have enough information to do the task correctly: sometimes, methods would be inherited by a class from an ancestor without noticing that the descendant had its own implementation of the method. Moreover, I didn’t like the fact that camlidl would write all the stubs into a single file, and my tool into another file, making any small wxOCaml application link itself with these two huge modules and the complete ELJ library, even if it only used a few of its classes.</p>
<p>As a consequence, I decided that the best spot to generate a modular and typed interface would be camlidl itself. I got a copy of its sources, and created a <a href="https://github.com/OCamlPro/wxOCaml/blob/master/wxCamlidl/wxmore.ml">new module in it</a>, using the symbolic IDL representation to generate the typed version, instead of the untyped version. The module would compute the hierarchy of classes, to be able to propagate statically methods from ancestors to children, and to generate cast functions from children to ancestors.</p>
<p>A first generated module, called <a href="https://github.com/OCamlPro/wxOCaml/blob/master/wxWidgets/wxClasses.mli">WxClasses</a> defines all the wxWidgets classes as abstract types:</p>
<pre><code class="language-ocaml">type eLJDragDataObject
and eLJMessageParameters
…
and wxDocument
and wxFrameLayout
and wxMenu
and wxMenuBar
and wxProcess
and …
</code></pre>
<p>Types starting with “eLJ…” are classes defined in the ELJ library for wxWidgets classes where methods have to be defined to override basic behaviors.</p>
<h2>Classes as modules</h2>
<p>For each wxWidget class, a specific module is created with:</p>
<ul>
<li>the constructor function, usually called “wxnew”
</li>
<li>the methods of the class, and the methods of the ancestors
</li>
<li>the cast functions to ancestors
</li>
</ul>
<p>For example, for the <a href="https://github.com/OCamlPro/wxOCaml/blob/master/wxWidgets/wxFrame.ml">WxFrame</a> module, the tool generates <a href="https://github.com/OCamlPro/wxOCaml/blob/master/wxWidgets/wxFrame.mli">this signature</a>:</p>
<pre><code class="language-ocaml">open WxClasses
external wxnew : (* constructor *)
  wxWindow -> int -> wxString -> int -> int -> int -> int -> int
  -> wxFrame
  = "camlidl_wxc_idl_wxFrame_Create_bytecode"
… (* direct methods *)
external setToolBar : wxFrame -> wxToolBar -> unit
  = "camlidl_wxc_idl_wxFrame_SetToolBar"
… (* inherited methods *)
external setToolTip : wxFrame -> wxString -> unit
  = "camlidl_wxc_idl_wxWindow_SetToolTip"
…
(* string wrappers *)
val wxnew :
  wxWindow -> int -> string -> int -> int -> int -> int -> int -> wxFrame
val setToolTip : wxFrame -> string -> unit
…
val ptrNULL : wxFrame (* a NULL pointer *)
…
external wxWindow : wxFrame -> wxWindow = "%identity" (* cast function *)
…
</code></pre>
<p>In this example, we can see that:</p>
<ul>
<li>WxFrame first defines the constructor for wxFrame objects. The constructor is later refined, because the stub makes use of wxString arguments, for which the tool creates a wrapper to use OCaml strings instead (using WxString.createUTF8 before the stub and WxString.delete after the stub).
</li>
<li>Stubs are then created for direct methods, i.e. functions corresponding to new methods of the class wxFrame. String wrappers are also produced if necessary.
</li>
<li>Stubs are also created for inherited methods. Here, “setToolTip” is a method of the class wxWindow (thus, its stub name wxWindow_SetToolTip). Normally, this function is in the WxWindow module, and takes a wxWindow as first argument. But to avoid the need for a cast from wxFrame to wxWindow to use it, we define it again here, allowing a wxFrame directly as first argument.
</li>
<li>The module also defines a ptrNULL value that can be used wherever a NULL pointer is expected instead of an object of the class.
</li>
<li>Finally, functions like “wxWindow” are cast functions from children to ancestor, allowing to use a value of type wxFrame wherever a value of type wxWindow is expected.
</li>
</ul>
<p>All functions that could not be put in such files are gathered in a module <a href="https://github.com/OCamlPro/wxOCaml/blob/master/wxWidgets/wxMisc.mli">WxMisc</a>. Finally, the tool also generates a module <a href="https://github.com/OCamlPro/wxOCaml/blob/master/wxWidgets/wxWidgets.mli">WxWidgets</a> containing a copy of all constructors with simpler names:</p>
<pre><code class="language-ocaml">…
val wxFrame : wxWindow -> int -> string -> int -> int -> int -> int -> int -> wxFrame
val wxFontMapper : unit -> wxFontMapper
…
</code></pre>
<p>and functions to ignore the results of functions:</p>
<pre><code class="language-ocaml">…
external ignore_wxFontMapper : wxFontMapper -> unit = "%ignore"
external ignore_wxFrame : wxFrame -> unit = "%ignore"
…
</code></pre>
<p>We expect wxOCaml applications to just start with “open WxWidgets” to get access to these constructors, to use functions prefixed by the class module names, and to use constants from the <a href="https://github.com/OCamlPro/wxOCaml/blob/master/wxWidgets/wxdefs.ml">Wxdefs module</a>.</p>
<p>Here is what the minimal application looks like:</p>
<pre><code class="language-ocaml">open WxWidgets
let _ =
  let onInit event =
    let frame_id = wxID () in
    let quit_id = wxID () in
    let about_id = wxID () in
    (* Create toplevel frame *)
    let frame = wxFrame WxWindow.ptrNULL frame_id "Hello World"
        50 50 450 350 Wxdefs.wxDEFAULT_FRAME_STYLE in
    WxFrame.setStatusText frame "Welcome to wxWidgets!" 0;
    (* Create a menu *)
    let menuFile = wxMenu "" 0 in
    WxMenu.append menuFile about_id "About" "About the application" false;
    WxMenu.appendSeparator menuFile;
    WxMenu.append menuFile quit_id "Exit" "Exit from the application" false;
    (* Add the menu to the frame menubar *)
    let menuBar = wxMenuBar 0 in
    ignore_int (WxMenuBar.append menuBar menuFile "&File");
    WxFrame.setMenuBar frame menuBar;
    ignore_wxStatusBar (WxFrame.createStatusBar frame 1 0);
    (* Handler for QUIT menu *)
    WxFrame.connect frame quit_id Wxdefs.wxEVT_COMMAND_MENU_SELECTED
      (fun _ -> exit 0);
    (* Handler for ABOUT menu *)
    WxFrame.connect frame about_id Wxdefs.wxEVT_COMMAND_MENU_SELECTED
      (fun _ ->
        ignore_int (
          WxMisc.wxcMessageBox "wxWidgets Hello World example."
            "About Hello World"
            (Wxdefs.wxOK lor Wxdefs.wxICON_INFORMATION)
            (WxFrame.wxWindow frame)
            Wxdefs.wxDefaultCoord
            Wxdefs.wxDefaultCoord
        )
      );
    (* Display the frame *)
    ignore_bool (WxFrame.show frame);
    ELJApp.setTopWindow (WxFrame.wxWindow frame)
  in
  WxMain.main onInit (* Main WxWidget loop starting the app *)
</code></pre>
<h2>Testers welcome</h2>
<p>The current code can be downloaded from our <a href="http://github.com/OCamlPro/wxOCaml">repository on GitHub</a>. It should work with wxWidgets 2.9.4, and the latest version of ocp-build (1.99-beta5).</p>
<p>Of course, as I had never written an application with wxWidgets before, I could only write a few examples, so I would really appreciate any feedback from beta testers, especially since there might be errors in the translation to IDL that make important functions impossible to use, and that I cannot detect by myself.</p>
<p>I am also particularly interested in feedback on the use of modules for classes, to see whether the corresponding style is usable. Our current feeling is that it is more verbose than a purely object-oriented style, but it is simpler for beginners, and improves the readability of code.</p>
<p>Finally, it was a short two-day hack, so it is far from finished. In particular, after hacking wxCamlidl and looking at the code of the ELJ library, I had the feeling that we could go directly from the C++ header files, or something equivalent, to produce not only the OCaml stubs and the typed interface, but also the C++-to-C bindings, and get rid of the ELJ library completely.</p>
An Indentation Engine for OCamlhttps://ocamlpro.com/blog/2013_03_18_an_indentation_engine_for_ocaml2013-03-18T08:12:13Z2013-03-18T08:12:13Z
Louis Gesbert
Since our last activity report we have released the first stable versions of two projects: OPAM, an installation manager for OCaml source packages, and ocp-indent, an indentation tool. We have already described the basics of OPAM in two precedent blog posts, so today we will focus on the release of ...<p>Since our last <a href="/blog/2013_02_18_overview_of_current_activities">activity report</a> we have released the first stable versions of two projects: <a href="https://opam.ocamlpro.com/">OPAM</a>, an installation manager for OCaml source packages, and <a href="https://github.com/OCamlPro/ocp-indent">ocp-indent</a>, an indentation tool.</p>
<p>We have already described the basics of OPAM in two precedent <a href="/blog/2013_01_17_beta_release_of_opam">blog</a> <a href="/blog/2013_03_15_opam_1.0.0_released">posts</a>, so today we will focus on the release of <code>ocp-indent</code>.</p>
<h3>Indentation should be consistent across editors</h3>
<p>When you work on a very large code-base, it is crucial to keep a
consistent indentation scheme. This is not only good for code review
purposes (when the indentation carries semantic properties) but also
when your code is starting to evolve and when the one who makes the
change is not the one who wrote the initial piece of code. In the latter
case, the variety of editors and local configurations usually leads to
a lot of small changes carrying no semantic value at all (such as changing
tabs to spaces, adding a few spaces at the beginning or end of lines, and
so on). This semantic noise considerably decreases the efficiency of
any code-review and change process and is usually very good at hiding
hard-to-track bugs in your code-base.</p>
<p>A few months ago, the solutions for OCaml to this indentation problem
were limited. For instance, you could write coding guidelines and hope
that all the developers in your project would follow them. If you wanted
to be more systematic, you could create and share a common
configuration file for some popular editors (most OCaml developers use
Emacs’ <code>tuareg-mode</code> or vim) but it is very hard to get
consistent indentation results across multiple tools. Moreover, having to
rely on a specific editor mode means that it is harder to fully
automate the indentation process, for instance when setting up a VCS
hook.</p>
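<p>With a standalone command-line indenter, this kind of automation becomes a few lines of shell. As an illustration only (the file selection and the strictness policy below are assumptions, not part of any released setup), a pre-commit hook can simply compare each staged OCaml file with what the tool prints on its standard output:</p>
<pre><code class="language-shell-session">$ cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Refuse the commit when a staged .ml/.mli file differs from ocp-indent's output.
for f in $(git diff --cached --name-only | grep -E '\.mli?$'); do
  ocp-indent "$f" | cmp -s - "$f" || {
    echo "$f: please reindent with ocp-indent" >&2
    exit 1
  }
done
EOF
$ chmod +x .git/hooks/pre-commit
</code></pre>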
<p>In order to overcome these pitfalls, <a href="https://www.janestreet.com/">Jane Street</a> asked us to design a new external tool with the following high-level specification:</p>
<ul>
<li>it should be easy to use inside and outside any editor;
</li>
<li>it should understand the OCaml semantics and reflect it in the indentation;
</li>
<li>it should be easy to maintain and to extend;
</li>
</ul>
<p>So we started to look at the OCaml tools’ ecosystem and we found an early prototype of Jun Furuse’s <a href="https://bitbucket.org/camlspotter">ocaml-indent</a>.
The foundation looked great but the results on real-world source code
were not as nice as they could be, so we decided to start from this base to
build our new tool, which we called <code>ocp-indent</code>. Today, <code>ocp-indent</code> and <code>ocaml-indent</code> do not have much code in common anymore, but the global architecture of the system remains the same.</p>
<h3>Writing an indentation engine for OCaml</h3>
<p>An indentation engine may seem like a rather simple problem: given
any line in the program, we want to compute its indentation level, based
on the code structure.</p>
<p>It turns out to be much more difficult than that, mainly because
indentation is only marginally semantic, and, worse, is a matter of
taste and “proper layout”. In short, it’s not a problem that can be
expressed concisely, because one really does want lots of specific cases
handled “nicely”, depending on the on-screen layout — position of line
breaks — rather than the semantic structure. <code>Ocp-indent</code>
does contain lots of ad-hoc logic for such cases. To make things harder,
the OCaml syntax is known to be difficult to handle, with a few
ambiguities.</p>
<h4>Indent process</h4>
<p><code>Ocp-indent</code> processes code in a simple and efficient way:</p>
<ul>
<li>We lex the input with a <a href="https://github.com/OCamlPro/ocp-indent/blob/master/src/approx_lexer.mll">modified version of the OCaml lexer</a>,
to guarantee complete consistency with OCaml itself. The lexer had to
be modified to be more robust (OCaml fails on errors, the indentation
tool should not) and to keep tokens like comments, quotations, and, in
the latest version, some ocamldoc block delimiters.
</li>
<li>Taking the token stream as input, we maintain a <a href="https://github.com/OCamlPro/ocp-indent/blob/master/src/indentBlock.ml">“block” stack</a>
that keeps information such as the kinds of blocks we have been through
to get to the cursor position, the column and the indentation
parameters. For instance, the “block” stack <code>[KBody KFun; KLet; KBody KModule]</code> corresponds to the position of <code>X</code> in the following piece of (pseudo-) code:
</li>
</ul>
<pre><code class="language-ocaml">…
module Foo = struct
  …
  let f = fun a -> X
</code></pre>
<ul>
<li>Each token may look up the stack to find its starting counterpart (<code>in</code> will look for <code>KLet</code>, etc.), or disambiguate (<code>=</code> will look for <code>KLet</code>, stopping on opening tokens like <code>KBracket</code>,
and will be inserted as an operator if none is found). This is flexible
enough to allow for “breaking” the stack when incorrect grammar is
found. For example, the unclosed paren in <code>module let x = ( end</code> should not break indentation after the <code>end</code>. Great care was taken in deciding which tokens are allowed to remove elements from the stack, and under which conditions.
</li>
<li>The stack can also be used to find a token that we want to align on, typically bars <code>|</code> in a pattern-matching.
</li>
<li>On every line break, the stack can be used to compute the indentation of the next line.
</li>
<li>In the case of partial file indentation (typically, reindenting one
line or a single block), on lines that shouldn’t be reindented the stack
is updated in reverse to adapt to the current indentation.
</li>
</ul>
<h4>Priorities</h4>
<p>The part where some abstraction can be put into the engine is the
knowledge of the semantics, and more precisely of the scope of the
operations. It’s also in that case that the indenter can help you write,
and not only read, your code. On that matter, <code>ocp-indent</code>
has a knowledge of the precedence of operators and constructs that is
used to know how far to unwind the stack, and what to align on. For
example, a <code>;</code> will flush function applications and most operators.</p>
<p>It is that part that gives it the most edge over <code>tuareg</code>,
and avoids semantically incorrect indents. All infix operators are
defined with a priority, a kind of indentation (an indentation increment or
an alignment on the relevant expression above), and an indentation
value (number of spaces to add). So for example most operators have a
priority lower than function application, but not <code>.</code>, which yields correct results for:</p>
<pre><code class="language-ocaml">let f =
somefun
record.
field
y
+ z
</code></pre>
<p>Boolean operators like <code>&&</code> and <code>||</code> are setup for alignment instead of indentation:</p>
<pre><code class="language-ocaml">let r = a
|| b
&& c
|| d
</code></pre>
<p>Additionally, some special operators are wanted with a <em>negative</em> alignment in some cases. This is also handled in a generic way by the engine. In particular, this is the case for <code>;</code> or <code>|</code>:</p>
<pre><code class="language-ocaml">type t = A
| B
let r = { f1 = x
; f2 = y
}
</code></pre>
<h4>A note on the integration in editors</h4>
<p><code>ocp-indent</code> can be used on the command-line to reindent whole files (or part of them with <code>--lines</code>),
but the most common use of an indenter is from an editor. If you are
lucky enough to be able to call OCaml code from your editor, you can use
it directly as a library, but otherwise, the preferred way is to use
the option <code>--numeric</code>: instead of reprinting the file
reindented, it will only output indentation levels, which you can then
process from your editor (for instance, using <code>indent-line-to</code> with emacs). That should be cheaper and will help preserve cursor position, etc.</p>
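<p>For illustration, here are a few possible invocations; the file name is made up, and the exact range syntax accepted by <code>--lines</code> should be checked against the installed version:</p>
<pre><code class="language-shell-session">$ ocp-indent foo.ml > foo.indented.ml       # reprint the whole file, reindented
$ ocp-indent --lines 10-15 foo.ml           # reindent only lines 10 to 15
$ ocp-indent --numeric --lines 10-15 foo.ml # print only the indentation levels
</code></pre>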
<p>Currently, a simple emacs binding working on either the ocaml or the
tuareg mode is provided, together with a vim mode contributed by Raphaël
Proust and David Powers.</p>
<h3>Results</h3>
<p>We’ve built <code>ocp-indent</code> based on a growing collection of <a href="https://github.com/OCamlPro/ocp-indent/tree/master/tests/passing">unit-tests</a>. If you find an indentation bug, feel free to <a href="https://github.com/OCamlPro/ocp-indent/issues">send us</a> a code snippet that we will incorporate into our test suite.</p>
<p>Our tests clearly show that the deep understanding that <code>ocp-indent</code>
has of the OCaml syntax makes it shine on specific cases. We are still
discussing and evaluating the implementation of a few related
corner cases; see for instance the <a href="http://htmlpreview.github.com/?https://github.com/OCamlPro/ocp-indent/blob/master/tests/failing.html">currently failing tests</a>.</p>
<p>We have also run some <a href="https://htmlpreview.github.com/?https://github.com/AltGr/ocp-indent-tests/blob/master/status.html">benchmarks</a> on real code-bases and the result is quite conclusive: <code>ocp-indent</code>
is always better than tuareg! This is a very nice result as most of the
existing source files are either indented manually or are following
tuareg standards. But <code>ocp-indent</code> is also orders of magnitude faster, which means you can integrate it seamlessly into any automatic process.</p>
OPAM 1.0.0 releasedhttps://ocamlpro.com/blog/2013_03_15_opam_1.0.0_released2013-03-15T08:12:13Z2013-03-15T08:12:13Z
Thomas Gazagnaire
I am very happy to announce the first official release of OPAM! Many of you already know and use OPAM so I won't be long. Please read beta-release-of-opam for a longer description. 1.0.0 fixes many bugs and add few new features to the previously announced beta-release. The most visible new feature, ...<p>I am <em>very</em> happy to announce the first official release of OPAM!</p>
<p>Many of you already know and use OPAM so I won't be long. Please read
<a href="/blog/2013_01_17_beta_release_of_opam">beta-release-of-opam</a> for a
longer description.</p>
<p>1.0.0 fixes many bugs and adds a few new features to the previously announced
beta-release.</p>
<p>The most visible new feature, which should be useful for beginners with
OCaml and OPAM, is an auto-configuration tool. This tool easily enables all
the features of OPAM (auto-completion, fixing the loading of scripts for the
toplevel, an opam-switch-eval alias, etc.). This tool runs interactively on each
<code>opam init</code> invocation. If you don't want OPAM to change your configuration
files, use <code>opam init --no-setup</code>. If you trust the tool blindly, use
<code>opam init --auto-setup</code>. You can later review the setup by doing
<code>opam config setup --list</code> and call the tool again using <code>opam config setup</code>
(and you can of course manually edit your ~/.profile (or ~/.zshrc for zsh
users), ~/.ocamlinit and ~/.opam/opam-init/*).</p>
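<p>For reference, the invocations mentioned above look like this (a quick sketch using only the flags cited in this post):</p>
<pre><code class="language-shell-session">$ opam init --no-setup      # never touch ~/.profile, ~/.ocamlinit, etc.
$ opam init --auto-setup    # apply all configuration changes without asking
$ opam config setup --list  # review the current setup
$ opam config setup         # call the configuration tool again
</code></pre>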
<p>Please report:</p>
<ul>
<li>Bug reports and feature requests for the OPAM tool: <code>https://github.com/OCamlPro/opam/issues</code>
</li>
<li>Packaging issues or requests for a new packages: <code>https://github.com/OCamlPro/opam-repository/issues</code>
</li>
<li>General queries to: <code>https://lists.ocaml.org/listinfo/platform</code>
</li>
<li>More specific queries about the internals of OPAM to: <code>https://lists.ocaml.org/listinfo/opam-devel</code>
</li>
</ul>
<h2>Install</h2>
<p>Packages for Debian and OSX (at least homebrew) should follow shortly and
I'm looking for volunteers to create and maintain rpm packages. The binary
installer is up-to-date for Linux and Darwin 64-bit architectures; the
32-bit version for Linux should arrive shortly.</p>
<p>If you want to build from sources, the full archive (including dependencies)
is available here:</p>
<p><code>https://github.com/ocaml/opam/releases/tag/2.1.0</code></p>
<h3>Upgrade</h3>
<p>If you are upgrading from 0.9.* you won't have anything special to do apart
from installing the new binary. You can then update your package metadata by
running <code>opam update</code>. If you want to use the auto-setup feature, remove the
"eval <code>opam config env</code>" line you previously added in your ~/.profile
and run <code>opam config setup --all</code>.</p>
<p>So everything should be fine. But you never know ... so if something goes
horribly wrong in the upgrade process (or if you are upgrading from an old
version of OPAM) you can still trash your ~/.opam, manually remove what OPAM
added in your ~/.profile (~/.zshrc for zsh users) and ~/.ocamlinit, and
start again from scratch.</p>
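<p>In practice, a minimal upgrade session following the steps above could look like this (a sketch; it assumes the new binary is already in your PATH):</p>
<pre><code class="language-shell-session">$ opam --version            # check that the 1.0.0 binary is the one being run
$ opam update               # refresh the package metadata
$ opam config setup --all   # optional: enable the new auto-setup feature
</code></pre>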
<h3>Random stats</h3>
<p>OPAM has been a great success on GitHub. Thanks everybody for the great contributions!</p>
<p><code>https://github.com/OCamlPro/opam</code>: +2000 commits, 26 contributors
<code>https://github.com/OCamlPro/opam-repository</code>: +1700 commits, 75 contributors, 370+ packages</p>
<p>On <code>http://opam.ocamlpro.com/</code>:
+400 unique visitors per week, 15k 'opam update' runs per week
+1300 unique visitors per month, 55k 'opam update' runs per month
3815 unique visitors since the alpha release</p>
<h3>Changelog</h3>
<p>The full change-log since the beta release in January:</p>
<p>1.0.0 [Mar 2013]</p>
<ul>
<li>Improve the lexer performance (thx to @oandrieu)
</li>
<li>Fix various typos (thx to @chaudhuri)
</li>
<li>Fix build issue (thx to @avsm)
</li>
</ul>
<p>0.9.6 [Mar 2013]</p>
<ul>
<li>Fix installation of pinned packages on BSD (thx to @smondet)
</li>
<li>Fix configuration for zsh users (thx to @AltGr)
</li>
<li>Fix loading of <code>~/.profile</code> when using dash (eg. in Debian/Ubuntu)
</li>
<li>Fix installation of packages with symbolic links (regression introduced in 0.9.5)
</li>
</ul>
<p>0.9.5 [Mar 2013]</p>
<ul>
<li>If necessary, apply patches and substitute files before removing a package
</li>
<li>Fix <code>opam remove <pkg> --keep-build-dir</code> so that it keeps the folder if a source archive was extracted
</li>
<li>Add build and install rules using ocamlbuild to help distro packagers
</li>
<li>Support arbitrary level of nested subdirectories in packages repositories
</li>
<li>Add <code>opam config exec "CMD ARG1 ... ARGn" --switch=SWITCH</code> to execute a command in a subshell
</li>
<li>Improve the behaviour of <code>opam update</code> wrt. pinned packages
</li>
<li>Change the default external solver criteria (only useful if you have aspcud installed on your machine)
</li>
<li>Add support for global and user configuration for OPAM (<code>opam config setup</code>)
</li>
<li>Stop yelling when OPAM is not up-to-date
</li>
<li>Update or generate <code>~/.ocamlinit</code> when running <code>opam init</code>
</li>
<li>Fix tests on *BSD (thx Arnaud Degroote)
</li>
<li>Fix compilation for the source archive
</li>
</ul>
<p>0.9.4 [Feb 2013]</p>
<ul>
<li>Disable auto-removal of unused dependencies. This can now be enabled on-demand using <code>-a</code>
</li>
<li>Fix compilation and basic usage on Cygwin
</li>
<li>Fix BSD support (use <code>type</code> instead of <code>which</code> to detect existing commands)
</li>
<li>Add a way to tag external dependencies in OPAM files
</li>
<li>Better error messages when trying to upgrade pinned packages
</li>
<li>Display <code>depends</code> and <code>depopts</code> fields in <code>opam info</code>
</li>
<li><code>opam info pkg.version</code> shows the metadata for this given package version
</li>
<li>Add missing <code>doc</code> fields in <code>.install</code> files
</li>
<li><code>opam list</code> now only shows installable packages
</li>
</ul>
<p>0.9.3 [Feb 2013]</p>
<ul>
<li>Add system compiler constraints in OPAM files
</li>
<li>Better error messages in case of conflicts
</li>
<li>Cleaner API to install/uninstall packages
</li>
<li>On upgrade, OPAM now performs all the remove actions first
</li>
<li>Use a cache for storing OPAM metadata: this greatly speeds up OPAM invocations
</li>
<li>After an upgrade, propose to reinstall a pinned package only if there were some changes
</li>
<li>Improvements to the solver heuristics
</li>
<li>Better error messages on cyclic dependencies
</li>
</ul>
<p>0.9.2 [Jan 2013]</p>
<ul>
<li>Install all the API files
</li>
<li>Fix <code>opam repo remove repo-name</code>
</li>
<li>speed-up <code>opam config env</code>
</li>
<li>support for <code>opam-foo</code> scripts (which can be called using <code>opam foo</code>)
</li>
<li>'opam update pinned-package' works
</li>
<li>Fix 'opam-mk-repo -a'
</li>
<li>Fix 'opam-mk-repo -i'
</li>
<li>clean-up pinned cache dir when a pinned package fails to install
</li>
</ul>
<p>0.9.1 [Jan 2013]</p>
<ul>
<li>Use ocaml-re 1.2.0
</li>
</ul>
An Overview of our Current Activities https://ocamlpro.com/blog/2013_02_18_overview_of_current_activities2013-02-18T08:12:13Z2013-02-18T08:12:13Z
Çagdas Bozman
From the early days of OCamlPro, people have been curious about our plans; they were asking how we worked at OCamlPro and what we were doing exactly. Now that we have started releasing projects more regularly, these questions come again. They are very reasonable questions, and have resolved to be mo...<p>From the early days of OCamlPro, people have been curious about our plans; they were asking how we worked at OCamlPro and what we were doing exactly. Now that we have started releasing projects more regularly, these questions have come up again. They are very reasonable questions, and we have resolved to be more public and to communicate more regularly. This post covers our activities since the beginning of 2013, and updates are scheduled on a monthly basis.</p>
<h2>OCamlPro ?</h2>
<p>OCamlPro was created to promote the use of OCaml in industry. In order to do so, we provide a wide range of services targeted at all stages of typical software projects: we train engineers, we improve the efficiency and usability of the OCaml compiler and tools, we help design new projects, we advise on which open-source software components to use and how, we help deliver OCaml software projects and we do custom software development. An additional focus is increasing the accessibility of OCaml for beginners and students.</p>
<p>Our customers are well-known industrial users such as <a href="http://www.janestreet.com/">Jane-Street</a>, <a href="http://www.citrix.com/">Citrix</a> and <a href="http://www.lexifi.com/">Lexifi</a> but we also help individual developers lost in the wild of non-OCaml environments inter-operate OCaml with other components. We also believe that collaborative R&D projects are a great opportunity to make existing companies discover OCaml and its benefits to their products and we are involved in several of them (see below).</p>
<p>Our engineering team is steadily growing (currently 9 full-time engineers in a joint lab between OCamlPro and INRIA) located in Paris and Nice. We gather a wide range of technical skills and industrial world expertise as we all come from major academic and industrial actors such as <a href="http://www.inria.fr/">INRIA</a>, Dassault Systèmes, <a href="http://www.mlstate.com/">MLstate</a> and <a href="http://www.citrix.com/">Citrix</a>. We also love the OCaml open-source ecosystem: we have been participating in the development of <a href="http://ocsigen.org/">ocsigen</a>, <a href="http://www.openmirage.org/">mirage</a>, <a href="http://www.xen.org/products/cloudxen.html">XCP</a>, <a href="http://mldonkey.sourceforge.net/">mldonkey</a>, <a href="http://www.marionnet.org/EN/">marionnet</a> and so on. By the way, OCamlPro has some open <a href="/jobs">positions</a> and we are still looking to hire excellent software engineers!</p>
<h2>OCaml Distribution</h2>
<p>The first of our technical activities is related to work on the OCaml distribution itself. We are part of the OCaml compiler development team - our INRIA members are part of the <a href="http://gallium.inria.fr/">Gallium</a> project which develops OCaml at INRIA - and we regularly contribute patches to improve the usability and performance of the compiler itself.</p>
<p>We have recently proposed <a href="http://caml.inria.fr/mantis/view.php?id=5894">a series of patches</a> to improve the performance of functions with float arguments and we have started developing a <a href="https://github.com/chambart/ocp-bench">framework</a> to benchmark the efficiency of compiler optimizations.</p>
<p>We are also actively exploring the design space for concurrency and distribution in OCaml, with an implementation of:</p>
<ul>
<li>a reentrant runtime
</li>
<li>a way to instantiate different runtimes in separate system threads in the same process
</li>
<li>an efficient multi-scale communication library, between threads and between processes.
</li>
</ul>
<p>We call this <strong>multi-runtime OCaml</strong> and a prototype is available on <a href="https://github.com/lucasaiu/ocaml/tree/master">github</a>.</p>
<p>Last, we are also making progress with the memory profiling tools. We work on a modified OCaml runtime which can store the location of each allocated block in the heap, with hooks to dump that heap on demand. External tools can then use that dump to produce useful statistics about memory usage of the program. The good news is that we now have a working and usable bytecode runtime and an external tool that produces basic memory information. We are preparing an alpha release in the next month, so stay tuned!</p>
<h2>Development Tools</h2>
<p>Our efforts to make OCaml more usable go further than looking at the compiler. We are improving the development tools already existing in the community, such as the recently released <a href="https://github.com/OCamlPro/ocp-indent">indentation tool</a> which initially came from an experiment by <a href="https://bitbucket.org/camlspotter/ocaml-indent">Jun Furuse</a>, and creating new ones where they are clearly missing.</p>
<p>The most recent news on that front concerns <a href="http://opam.ocamlpro.com/">OPAM</a>, the package manager for OCaml that we have been developing since mid-2012. For people not familiar with it yet, OPAM is a source-based package manager for OCaml. It supports resolution of complex dependency constraints between packages, multiple simultaneous compiler installations, flexible package constraints, and a Git-friendly development workflow. The beta release was announced in January, and we expect the first official release to happen in the next few weeks. The OCaml community has warmly welcomed OPAM, and the <a href="https://github.com/OCamlPro/opam-repository">repository of its package metadata</a> has already become the most forked OCaml project on GitHub! Interestingly, two meetups have gathered more than fifty OPAM users in <a href="http://meetup.ocaml-lang.fr/">Paris</a> and Cambridge in January. We really hope this kind of meetup can be generalized: if you want to help us organize one in your area, feel free to contact us!</p>
<p>The other major part of our work around development tools for OCaml is TypeRex. TypeRex is a collection of tools which focus on improving developer efficiency, and lowering the entry barrier for experienced developers who are used to shiny IDEs in other languages. The first version of <a href="http://www.typerex.org/">TypeRex</a>, that was released last year, was a first step in this direction: it provided an enhanced emacs mode for OCaml code, with colorization, indentation, refactoring facilities, completion, code-navigation, documentation tooltips, etc. The next version of TypeRex (simply dubbed <strong>typerex2</strong>) is underway, with more independent tools (like ocp-indent), less tightly coupled to Emacs, and focused on better integration with various IDEs. If you are interested in following the progress of these tools, you can check the <strong>typerex2</strong> OPAM packages with 1.99.X+beta version numbers, which we release on a regular basis.</p>
<h2>R&D projects</h2>
<p>The idea that OCaml is the right choice to create new innovative products is at the core of OCamlPro. We are very involved in the research community, especially on Functional Languages, with participation in the Program Committees of various conferences such as <a href="http://oud.ocaml.org/">the OCaml User and Developer (OUD)</a> workshop and <a href="http://cufp.org/">the Commercial Users of Functional Programming (CUFP)</a> conference. We also joined two collaborative R&D projects in 2012, the <a href="http://www.richelieu.pro/">Richelieu FUI</a> and <a href="http://bware.lri.fr/index.php/BWare_project">BWare ANR</a>. As part of the Richelieu project, we are developing a JIT compiler for the <a href="http://www.scilab.org/">Scilab language</a>. As part of the BWare project, we improve the efficiency of automatic theorem provers, with a specific focus on <a href="http://alt-ergo.lri.fr/">Alt-Ergo</a>, an SMT solver particularly suited to program verification. We are always interested in bringing our expertise in compiler technologies and knowledge of complex and distributed systems to new R&D projects: contact us if you are interested!</p>
<p>In the Richelieu project, our combined technical and theoretical expertise proved particularly effective. The research consortium is led by <a href="http://www.scilab-enterprises.com/">Scilab Enterprises</a>, which needed a safer and more efficient execution engine for Scilab in order to compete with Matlab. We joined the consortium to implement the early analysis required by the JIT compiler. The project started last December, and we have since specified the semantics of the language and implemented a working prototype of an interpreter that is already as fast as the current C++ engine of Scilab 6.</p>
<h2>Growing the Community</h2>
<p>Our last important domain of activity is geared towards the OCaml community. It is important to us that the community grows bigger, and to achieve this goal there are some basic blocks that we need to help build, together with the other main actors of the community.</p>
<p>The first missing block is good reference documentation. This year will end with (at least) one new important book for the language: <a href="http://www.realworldocaml.org/">Real-World OCaml</a>, which targets experienced software engineers who do not know OCaml yet. We collaborate with <a href="http://www.cl.cam.ac.uk/projects/ocamllabs/">OCamlLabs</a> to make the technical experience of this book a success. We also work to improve the general experience of using OCaml for complete beginners by creating a stable <a href="https://github.com/OCamlPro/ocp-edit-simple">replacement</a> for the broken <strong>ocamlwin</strong>, the simple editor distributed with OCaml on Windows.</p>
<p>It is also important to us that OCaml uses the web as a platform to attract new users, as is becoming the norm for modern programming languages. We are members of the <a href="http://www.ocaml.org/">ocaml.org</a> building effort and have created <a href="http://try.ocamlpro.com/">tryocaml</a> to let newcomers easily discover the language directly from their browser. TryOCaml has been welcomed as a great tool, already adopted and adapted: see for instance <a href="http://rtt.forge.ocamlcore.org/tryrtt.html">tryrtt</a> or <a href="http://rml.lri.fr/tryrml/">try ReactiveML</a>. We are in the process of simplifying the integration with other compiler variants.
Last, but not least, we collaborate very closely with OCamlLabs to create the OCaml Platform: a consistent set of libraries, thoroughly tested and integrated, with a rolling release schedule of 6 months. The platform will be based on <a href="http://opam.ocamlpro.com/">OPAM</a> and we are currently designing and prototyping a testing infrastructure to improve and guarantee the quality of packages.</p>
Beta Release of OPAMhttps://ocamlpro.com/blog/2013_01_17_beta_release_of_opam2013-01-17T08:12:13Z2013-01-17T08:12:13Z
Louis Gesbert
OPAM is a source-based package manager for OCaml. It supports multiple simultaneous compiler installations, flexible package constraints, and a Git-friendly development workflow. I have recently announced the beta-release of OPAM on the caml-list, and this blog post introduces the basics to new OPAM...<p>OPAM is a source-based package manager for OCaml. It supports
multiple simultaneous compiler installations, flexible package
constraints, and a Git-friendly development workflow. I have recently
announced the beta-release of OPAM on the <a href="https://sympa.inria.fr/sympa/arc/caml-list/2013-01/msg00073.html">caml-list</a>, and this blog post introduces the basics to new OPAM users.</p>
<h3>Why OPAM</h3>
<p>We decided to start writing a brand new package manager for
OCaml at the beginning of 2012, after looking at the state of affairs in
the OCaml community and not being completely satisfied with the
existing solutions, especially regarding the management of dependency
constraints between packages. Existing technologies such as GODI, oasis,
odb and ocamlbrew did contain lots of good ideas that we shamelessly
stole, but the final user experience was not so great — and we disagreed
with some of the architectural choices, so it wasn’t easy to
contribute fixes for the existing flaws. Thus we started to discuss the
specification of a new package manager with folks from <a href="https://www.janestreet.com/">Jane Street</a> who decided to fund the project and from the <a href="https://www.mancoosi.org/">Mancoosi project</a>
to integrate state-of-the-art dependency management technologies. We
then hired an engineer to do the initial prototyping work — and this
effort finally gave birth to OPAM!</p>
<h3>Installing OPAM</h3>
<p>OPAM packages are already available for homebrew, macports and
arch-linux. Debian and Ubuntu packages should be available quite soon.
In any case, you can either use a <a href="https://github.com/ocaml/opam/blob/master/shell/opam_installer.sh">binary installer</a> or simply install it from <a href="https://github.com/OCamlPro/opam/archive/0.9.1.tar.gz">sources</a>. To learn more about the installation process, read the <a href="https://opam.ocamlpro.com/doc/Quick_Install.html">installation instructions</a>.</p>
<h3>Initializing OPAM</h3>
<p>Once you’ve installed OPAM, you have to initialize it. OPAM will store all its state under <code>~/.opam</code>,
so if you want to reset your OPAM configuration, simply remove that
directory and restart from scratch. OPAM can either use the compiler
installed on your system or it can also install a fresh version of the
compiler:</p>
<pre><code class="language-shell-session">$ opam init # Use the system compiler<br>
$ opam init –comp 4.00.1 # Use OCaml 4.00.1<br>
</code></pre>
<p>OPAM will prompt you to add a shell script fragment to your <code>.profile</code>.
It is highly recommended to follow these instructions, as it lets OPAM
correctly set up the environment variables it needs to compile and
configure the packages.</p>
<h3>Getting help</h3>
<p>The OPAM user manual is integrated:</p>
<pre><code class="language-shell-session">$ opam –help # Get help on OPAM itself
$ opam init –help # Get help on the init sub-command
</code></pre>
<h3>Basic commands</h3>
<p>Once OPAM is initialized, you can ask it to list the available
packages, get package information and search for a given pattern in
package descriptions:</p>
<pre><code class="language-shell-session">$ opam list *foo* # list all the package containing ‘foo’ in their name
$ opam info foo # Give more information on the ‘foo’ package
$ opam search foo # search for the string ‘foo’ in all package descriptions
</code></pre>
<p>Once you’ve found a package you would like to install, just run the usual <code>install</code> command.</p>
<pre><code class="language-shell-session">$ opam install lwt # install lwt and its dependencies
$ opam remove lwt # remove lwt and its dependencies
</code></pre>
<p>Later on, you can check whether new packages are available and you can upgrade your package installation.</p>
<pre><code class="language-shell-session">$ opam update # check if new packages are available
$ opam upgrade # upgrade your packages to the latest version
</code></pre>
<p>Casual users of OCaml won’t need to know more about OPAM. Simply
remember to update and upgrade OPAM regularly to keep your system
up-to-date.</p>
<h3>Use-case 1: Managing Multiple Compilers</h3>
<p>A new release of OCaml is available and you want to be able to use it. How do you do this in OPAM? It is as simple as:</p>
<pre><code class="language-shell-session">$ opam update # pick-up the latest compiler descriptions
$ opam switch 4.00.2 # switch to the new 4.00.2 release
$ opam switch export --switch=system | opam switch import -y
</code></pre>
<p>The first line will get the latest package and compiler descriptions,
and will tell you if new packages or new compilers are available.
Supposing that 4.00.2 is now available, you can then <code>switch</code>
to that version using the second command. The last command imports all
the packages installed by OPAM for the OCaml compiler installed on your
system (if any).</p>
<p>You can also easily use the latest unstable version of OCaml if you want to give it a try:</p>
<pre><code class="language-shell-session">$ opam switch 4.01.0dev+trunk # install trunk
$ opam switch reinstall 4.01.0dev+trunk # reinstall trunk
</code></pre>
<p>Reinstalling trunk means getting the latest changesets and recompiling the packages already installed for that compiler switch.</p>
<h3>Use-case 2: Managing Multiple Repositories</h3>
<p>Sometimes, you want to let people use a new version of your software
early. Or you are working in a company and expose internal libraries to
your coworkers but you don’t want them to be available to anybody using
OPAM. How can you do that with OPAM? It’s easy! You can set up your own
repository (see for instance <a href="https://github.com/xen-org/opam-repo-dev/">xen-org</a>’s development packages) and add it to your OPAM configuration:</p>
<pre><code class="language-shell-session">$ opam repository list # list the repositories available in your config
$ opam repository add xen-org git://github.com/xen-org/opam-repo-dev.git
$ opam repository list # new xen-org repository available
</code></pre>
<p>This will add the repository to your OPAM configuration and it will display the newly available packages. The next time you run <code>opam update</code> OPAM will then scan for any change in the remote git repository.</p>
<p>Repositories can either be local (e.g. on your filesystem), remote (available through HTTP), or stored in git or darcs.</p>
<h3>Use-case 3: Using Development Packages</h3>
<p>You want to try the latest version of a package which has not yet
been released, or you have a patched version of a package that you want
to try. How can you do it? OPAM has a <code>pin</code> sub-command which lets you do that easily:</p>
<pre><code class="language-shell-session">$ opam pin lwt /local/path/
$ opam install lwt # install the version of lwt stored in /local/path
</code></pre>
<p>You can also use a given branch in a given git repository. For instance, if you want the library <code>re</code> to be compiled with the code in the <code>experimental</code> branch of its development repository you can do:</p>
<pre><code class="language-shell-session">$ opam pin re git://github.com/ocaml/ocaml-re.git#experimental
$ opam install re
</code></pre>
<p>When building the packages, OPAM will use the path set up with the
pin command instead of using the upstream archives. Also, on the next
update, OPAM will automatically check whether some changes happened and
whether the packages need to be recompiled:</p>
<pre><code class="language-shell-session">$ opam update lwt # check for changes in /local/path
$ opam update re # check for change in the remote git branch
$ opam upgrade lwt re # upgrade re and lwt if necessary
</code></pre>
<h3>Conclusion</h3>
<p>I’ve briefly explained some of the main features of OPAM. If you want to go further, I would advise reading the <a href="https://opam.ocamlpro.com/doc/Advanced_Usage.html">user</a> and <a href="https://opam.ocamlpro.com/doc/Packaging.html">packager</a> tutorials. If you really want to understand the internals of OPAM, you can also read the <a href="https://github.com/OCamlPro/opam/blob/master/doc/dev-manual/dev-manual.pdf?raw=true">developer manual</a>.</p>
OCamlPro’s Contributions to OCaml 4.00.0https://ocamlpro.com/blog/2012_08_20_ocamlpro_contributions_to_4002012-08-20T08:12:13Z2012-08-20T08:12:13Z
Fabrice Le Fessant
OCaml 4.00.0 has been released on July 27, 2012. For the first time, the new OCaml includes some of the work we have been doing during the last year. In this article, I will present our main contributions, mostly funded by Jane Street and Lexifi. Binary Annotations for Advanced Development Tools OCa...<p>OCaml 4.00.0 has been released on July 27, 2012. For the first time,
the new OCaml includes some of the work we have been doing during the
last year. In this article, I will present our main contributions,
mostly funded by Jane Street and Lexifi.</p>
<h2>Binary Annotations for Advanced Development Tools</h2>
<p>OCaml 4.00.0 has a new option <code>-bin-annot</code> (undocumented, for now, as
it is still being tested). This option tells the compiler to dump in
binary format a compressed version of the typed tree (an abstract
syntax tree with type annotations) to a file (with the <code>.cmt</code>
extension for implementation files, and <code>.cmti</code> for interface
files). This file can then be used by development tools to provide new
features, based on the full knowledge of types in the sources. One of
the first tools to use it is the new version of <code>ocamlspotter</code>, by Jun
Furuse.</p>
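<p>For example, with a hypothetical <code>foo.ml</code>, compiling with this option produces the binary annotation file next to the usual compilation outputs:</p>
<pre><code class="language-shell-session">$ ocamlopt -bin-annot -c foo.ml
$ ls foo.*
foo.cmi  foo.cmt  foo.cmx  foo.ml  foo.o
</code></pre>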
<p>This new option will probably make the old option <code>-annot</code> obsolete
(except, maybe, in specific contexts where you don’t want to depend
on the internal representation of the typedtree, for example when you
are modifying this representation!). Generated files are much smaller
than with the <code>-annot</code> option, and much faster to write (during
compilation) and to read (for analysis).</p>
<h2>New Options for ocamldep</h2>
<p>As requested on the bug tracker, we implemented a set of new options for ocamldep:</p>
<ul>
<li>
<p><code>-all</code> will print all the dependencies, i.e. not only on .cmi, .cmo and .cmx files, but also on source files, and for .o files. In this mode also, no proxying is performed: if there is no interface file, a bytecode dependency will still appear against the .cmi file, and not against the .cmo file as it would before;</p>
</li>
<li>
<p><code>-one-line</code> will not break dependencies on several lines;</p>
</li>
<li>
<p><code>-sort</code> will print the arguments of ocamldep (filenames) in the order of dependencies, so that the following command should work when all source files are in the same directory:</p>
</li>
</ul>
<pre><code class="language-shell-session">ocamlopt -o my_program `ocamldep -sort *.ml *.mli
</code></pre>
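<p>The other two options can be combined as well; for instance, a flat and complete <code>.depend</code> file for a Makefile could be generated with (file names are only an example):</p>
<pre><code class="language-shell-session">ocamldep -all -one-line *.mli *.ml > .depend
</code></pre>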
<h2>CFI Directives for Debugging</h2>
<p>OCaml tries to make the best use of available registers and stack
space, and consequently, its layout on the stack is much different
from the one of C functions. Also, function names are mangled to make
them local to their module. As a consequence, debugging native code
OCaml programs has long been a problem with previous versions of
OCaml, since the debugger cannot correctly print the backtrace of the
stack, nor put breakpoints on OCaml functions.</p>
<p>In OCaml 4.00.0, we worked on a patch submitted on the bug tracker to
improve the situation: x86 and amd64 backends now emit more debugging
directives, such as the locations in the source corresponding to
functions in the assembly (so that you can put breakpoints at function
entry), and CFI directives, indicating the correct stack layout, for
the debugger to correctly unwind the stack. These directives are part
of the DWARF debugging standard.</p>
<p>Unfortunately, line by line stepping is not yet available, but here is an example of session that was not possible with previous versions:</p>
<pre><code class="language-ocaml">let f x y = List.map ( (+) x ) y
let _ = f 3 [1;2;3;4]
</code></pre>
<pre><code class="language-shell-session">$ ocamlopt -g toto.ml
$ gdb ./a.out
(gdb) b toto.ml:1
Breakpoint 1 at 0x4044f4: file toto.ml, line 1.
(gdb) run
Starting program: /home/lefessan/ocaml-4.00.0-example/a.out
Breakpoint 1, 0x00000000004044f4 in camlToto__f_1008 () at toto.ml:1
1 let f x y = List.map ( (+) x ) y
(gdb) bt
0 0x00000000004044f4 in camlToto__f_1008 () at toto.ml:1
1 0x000000000040456c in camlToto__entry () at toto.ml:2
2 0x000000000040407d in caml_program ()
3 0x0000000000415fe6 in caml_start_program ()
4 0x00000000004164b5 in caml_main (argv=0x7fffffffe3f0) at startup.c:189
5 0x0000000000408cdc in main (argc=<optimized out>, argv=<optimized out>)
at main.c:56
(gdb)
</code></pre>
<h2>Optimisation of Partial Function Applications</h2>
<p>Few people know that partial applications with multiple arguments are
not very efficient. For example, do you know how many closures are
dynamically allocated in the following example?</p>
<pre><code class="language-ocaml">let f x y z = x + y + z
let sum_list_offsets orig list = List.fold_left (f orig) 0 list
let sum = sum_list_offsets 10 [1;2;3]
</code></pre>
<p>Most programmers would reply one, <code>f orig</code>, but that’s not all
(indeed, f and sum_list_offsets are allocated statically, not
dynamically, as they have no free variables). Actually, three more
closures are allocated, when <code>List.fold_left</code> is executed on the list,
one closure per element of the list.</p>
<p>The reason for this is that OCaml has only two modes to execute
functions: either all arguments are present, or just one
argument. Prior to 4.00.0, when a function would enter the second mode
(as f in the previous example), then it would remain in that mode,
meaning that the two other arguments would be passed one by one,
creating a partial closure between them.</p>
<p>In 4.00.0, we implemented a simple optimization, so that whenever all
the remaining expected arguments are passed at once, no partial
closure is created and the function is immediately called with all its
arguments, leading to only one dynamic closure creation in the
example.</p>
<h2>Optimized Pipe Operators</h2>
<p>It is sometimes convenient to use the pipe notation in OCaml programs, for example:</p>
<pre><code class="language-ocaml">let (|>) x f = f x;;
let (@@) f x = f x;;
[1;2;3] |> List.map (fun x -> x + 2) |> List.map print_int;;
List.map print_int @@ List.map (fun x -> x + 1 ) @@ [1;2;3];;
</code></pre>
<p>However, such <code>|></code> and <code>@@</code> operators are currently not optimized: for
example, the last line will be compiled as:</p>
<pre><code class="language-ocaml">let f1 = List.map print_int;;
let f2 = List.map (fun x -> x + 1);;
let x = f2 [1;2;3;];;
f1 x;;
</code></pre>
<p>Which means that partial closures are allocated every time a function
is executed with multiple arguments.</p>
<p>In OCaml 4.00.0, we optimized these operators by providing native
operators, for which no partial closures are generated:</p>
<pre><code class="language-ocaml">external (|>) : ‘a -> (‘a -> ‘b) -> ‘b = "%revapply";;
external ( @@ ) : (‘a -> ‘b) -> ‘a -> ‘b = "%apply"
</code></pre>
<p>Now, the previous example is equivalent to:</p>
<pre><code class="language-ocaml">List.map print_int (List.map ( (+) 1 ) [1;2;3])
</code></pre>
<h2>Bug Fixing</h2>
<p>Of course, a lot of our contributions are not always as visible as the
previous ones. We also spent a lot of time fixing small bugs. Although
it doesn’t sound very fun, fixing bugs in OCaml is also fun, because
bugs are often challenging to understand, and even more challenging to
remove without introducing new ones!</p>
Profiling OCaml amd64 code under Linuxhttps://ocamlpro.com/blog/2012_08_08_profiling_ocaml_amd64_code_under_linux2012-08-08T08:12:13Z2012-08-08T08:12:13Z
Çagdas Bozman
We have recently worked on modifying the OCaml system to be able to profile OCaml code on Linux amd64 systems, using the processor performance counters now supported by stable kernels. This page presents this work, funded by Jane Street. The patch is provided for OCaml version 4.00.0. If you need it...<p>We have recently worked on modifying the OCaml system to be able to profile OCaml code on Linux amd64 systems, using the processor performance counters now supported by stable kernels. This page presents this work, funded by Jane Street.</p>
<p>The patch is provided for OCaml version 4.00.0. If you need it for 3.12.1, some more work is required, as we would need to backport some improvements that were already in the 4.00.0 code generator.</p>
<h2 class="page-subtitle">
An example: profiling <code>ocamlopt.opt</code>
</h2>
<p>Here is an example of a session of profiling done using both Linux performance tools and a modified OCaml 4.00.0 system (the patch is available at the end of this article).</p>
<p>Linux performance tools are available as part of the Linux kernel (in the <code>linux-tools</code> package on Debian/Ubuntu). Most of the tools are invoked through the <code>perf</code> command, à la git. For example, we are going to check where the time is spent when calling the <code>ocamlopt.opt</code> command:</p>
<pre><code class="language-bash">perf record -g ./ocamlopt.opt -c -I utils -I parsing -I typing typing/*.ml
</code></pre>
<p>This command generates a file <code>perf.data</code> in the current directory, containing all the events that were received during the execution of the command. These events contain the values of the performance counters in the amd64 processor, and the call-chain (backtrace) at the event.</p>
<p>We can inspect this file using the command:</p>
<pre><code class="language-bash">perf report -g
</code></pre>
<p>The command displays:</p>
<pre><code class="language-bash">Events: 3K cycles
+ 9.81% ocamlopt.opt ocamlopt.opt [.] compare_val
+ 8.85% ocamlopt.opt ocamlopt.opt [.] mark_slice
+ 7.75% ocamlopt.opt ocamlopt.opt [.] caml_page_table_lookup
+ 7.40% as as [.] 0x5812
+ 5.60% ocamlopt.opt [kernel.kallsyms] [k] 0xffffffff8103d0ca
+ 3.91% ocamlopt.opt ocamlopt.opt [.] sweep_slice
+ 3.18% ocamlopt.opt ocamlopt.opt [.] caml_oldify_one
+ 3.14% ocamlopt.opt ocamlopt.opt [.] caml_fl_allocate
+ 2.84% as [kernel.kallsyms] [k] 0xffffffff81317467
+ 1.99% ocamlopt.opt ocamlopt.opt [.] caml_c_call
+ 1.99% ocamlopt.opt ocamlopt.opt [.] caml_compare
+ 1.75% ocamlopt.opt ocamlopt.opt [.] camlSet__mem_1148
+ 1.62% ocamlopt.opt ocamlopt.opt [.] caml_oldify_mopup
+ 1.58% ocamlopt.opt ocamlopt.opt [.] camlSet__bal_1053
+ 1.46% ocamlopt.opt ocamlopt.opt [.] camlSet__add_1073
+ 1.37% ocamlopt.opt libc-2.15.so [.] 0x15cbd0
+ 1.37% ocamlopt.opt ocamlopt.opt [.] camlInterf__compare_1009
+ 1.33% ocamlopt.opt ocamlopt.opt [.] caml_apply2
+ 1.09% ocamlopt.opt ocamlopt.opt [.] caml_modify
+ 1.07% sh [kernel.kallsyms] [k] 0xffffffffa07e16fd
+ 1.07% as libc-2.15.so [.] 0x97a61
+ 0.94% ocamlopt.opt ocamlopt.opt [.] caml_alloc_shr
</code></pre>
<p>Using the arrow keys and the <code>Enter</code> key to expand an item, we can get a better idea of where most of the time is spent:</p>
<pre><code class="language-bash">Events: 3K cycles
+ 9.81% ocamlopt.opt ocamlopt.opt [.] compare_val
- compare_val
- 71.68% camlSet__mem_1148
+ 98.01% camlInterf__add_interf_1121
+ 1.99% camlInterf__add_pref_1158
- 21.48% camlSet__add_1073
- camlSet__add_1073
+ 93.41% camlSet__add_1073
+ 6.59% camlInterf__add_interf_1121
+ 1.44% camlReloadgen__fun_1386
+ 1.43% camlClosure__close_approx_var_1373
+ 1.43% camlSwitch__opt_count_1239
+ 1.34% camlTbl__add_1050
+ 1.20% camlEnv__find_1408
+ 8.85% ocamlopt.opt ocamlopt.opt [.] mark_slice
- 7.75% ocamlopt.opt ocamlopt.opt [.] caml_page_table_lookup
- caml_page_table_lookup
+ 50.03% camlBtype__set_commu_1704
+ 49.97% camlCtype__expand_head_1923
+ 7.40% as as [.] 0x5812
+ 5.60% ocamlopt.opt [kernel.kallsyms] [k] 0xffffffff8103d0ca
+ 3.91% ocamlopt.opt ocamlopt.opt [.] sweep_slice
Press `?` for help on key bindings
</code></pre>
<p>We notice that a lot of time is spent in the <code>compare_val</code> primitive, called from the <code>Pervasives.compare</code> function, itself called from the <code>Set</code> module in <code>asmcomp/interf.ml</code>. We can locate the corresponding code at the beginning of the file:</p>
<pre><code class="language-ocaml">module IntPairSet =
Set.Make(struct type t = int * int let compare = compare end)
</code></pre>
<p>Let's replace the polymorphic function <code>compare</code> by a monomorphic function, optimized for pairs of small ints:</p>
<pre><code class="language-ocaml">module IntPairSet =
Set.Make(struct type t = int * int
let compare (a1,b1) (a2,b2) =
if a1 = a2 then b1 - b2 else a1 - a2
end)
</code></pre>
<p>We can now compare the speed of the two versions:</p>
<pre><code class="language-bash">peerocaml:~/ocaml-4.00.0% time ./ocamlopt.old -c -I utils -I parsing -I typing typing/.ml
./ocamlopt.old 7.38s user 0.56s system 97% cpu 8.106 total
peerocaml:~/ocaml-4.00.0% time ./ocamlopt.new -c -I utils -I parsing -I typing typing/.ml
./ocamlopt.new 6.16s user 0.50s system 97% cpu 6.827 total
</code></pre>
<p>And we get an interesting speedup! Now, we can iterate the process, check where most of the time is spent in the new version, optimize the critical path and so on.</p>
<h2 class="page-subtitle">
Installation of the modified OCaml system
</h2>
<p>A modified OCaml system is required because, for each event, the Linux kernel must attach a backtrace of the stack (call-chain). However, the kernel is not able to use standard DWARF debugging information, and current OCaml stack frames are too complex to be unwound without this DWARF information. Instead, we had to modify the OCaml code generator to follow the same conventions as C for frame pointers, i.e. saving the frame pointer on function entry and restoring it on function exit. This required decreasing the number of available registers from 13 to 12, using <code>%rbp</code> as the frame pointer, leading to an average 3-5% slowdown in execution time.</p>
<p>The patch for OCaml 4.00.0 is available here:</p>
<p><a href="http://ocamlpro.com//files/omit-frame-pointer-4.00.0.patch">omit-frame-pointer-4.00.0.patch</a> (20 kB, v2, updated 2012/08/13)</p>
<p>To use it, you can use the following recipe, that will compile and install the patched version in ~/ocaml-4.00-with-fp.</p>
<pre><code class="language-shell-session">$ wget http://caml.inria.fr/pub/distrib/ocaml-4.00.0/ocaml-4.00.0.tar.gz
$ tar zxf ~/ocaml-4.00.0.tar.gz
$ cd ocaml-4.00.0
$~/ocaml-4.00.0% wget ocamlpro.com/files/omit-frame-pointer-4.00.0.patch
$~/ocaml-4.00.0% patch -p1 < omit-frame-pointer-4.00.0.patch
$~/ocaml-4.00.0% ./configure -prefix ~/ocaml-4.00-with-fp
$~/ocaml-4.00.0% make world opt opt.opt install
$~/ocaml-4.00.0% cd ~
$ export PATH=$HOME/ocaml-4.00-with-fp/bin:$PATH
</code></pre>
<p>It is important to know that the patch modifies the OCaml calling convention, meaning that ALL THE MODULES AND LIBRARIES in your application must be recompiled with this version.</p>
<p>On our benchmarks, the slowdown induced by the patch is between 3 and 5%. You can still compile your application without frame pointers, for production, using a new option <code>-fomit-frame-pointer</code> that was added by the patch.</p>
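<p>For instance, with the patched compiler, a production build of a hypothetical <code>foo.ml</code> can drop the frame pointers again like this:</p>
<pre><code class="language-shell-session">$ ocamlopt -fomit-frame-pointer -c foo.ml
</code></pre>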
<p>This patch has been submitted for inclusion in OCaml. You can follow its status and contribute to the discussion here:
<a href="http://caml.inria.fr/mantis/view.php?id=5721">http://caml.inria.fr/mantis/view.php?id=5721</a></p>
Packing and Functorshttps://ocamlpro.com/blog/2011_08_10_packing_and_functors2011-08-10T08:12:13Z2011-08-10T08:12:13Z
Fabrice Le Fessant
We have recently worked on modifying the OCaml system to be able to pack a set of modules within a functor, parameterized on some signatures. This page presents this work, funded by Jane Street. All the patches on this page are provided for OCaml version 3.12.1. Packing Functors Installation of the ...<p>We have recently worked on modifying the OCaml system to be able to
pack a set of modules within a functor, parameterized on some
signatures. This page presents this work, funded by Jane Street.</p>
<p>All the patches on this page are provided for OCaml version 3.12.1.</p>
<h2>Packing Functors</h2>
<h3>Installation of the modified OCaml system</h3>
<p>The patch for OCaml 3.12.1 is available here:</p>
<pre><code class="language-shell-session">ocaml+libfunctor-3.12.1.patch.gz (26 kB)
</code></pre>
<p>To use it, you can use the following recipe, that will compile and
install the patched version in <code>~/ocaml+libfunctor-3.12.1/bin/</code>.</p>
<pre><code class="language-shell-session">~% wget http://caml.inria.fr/pub/distrib/ocaml-3.12/ocaml-3.12.1.tar.gz
~% tar zxf ~/ocaml-3.12.1.tar.gz
~% cd ocaml-3.12.1
~/ocaml-3.12.1% wget ocamlpro.com/code/ocaml+libfunctor-3.12.1.patch.gz
~/ocaml-3.12.1% gzip -d ocaml+libfunctor-3.12.1.patch.gz
~/ocaml-3.12.1% patch -p1 < ocaml+libfunctor-3.12.1.patch
~/ocaml-3.12.1% ./configure --prefix ~/ocaml+libfunctor-3.12.1
~/ocaml-3.12.1% make coldstart
~/ocaml-3.12.1% make ocamlc ocamllex ocamltools
~/ocaml-3.12.1% make library-cross
~/ocaml-3.12.1% make bootstrap
~/ocaml-3.12.1% make all opt opt.opt
~/ocaml-3.12.1% make install
~/ocaml-3.12.1% cd ~
~% export PATH=$HOME/ocaml+libfunctor-3.12.1/bin:$PATH
</code></pre>
<p>Note that it needs to bootstrap the compiler, as the format of object
files is not compatible with the one of ocaml-3.12.1.</p>
<h3>Usage of the lib-functor patch</h3>
<p>Now that you are equipped with the new system, you can start using it. The lib-functor patch adds two new options to the compilers ocamlc and ocamlopt:</p>
<ul>
<li>
<p><code>-functor <interface_file></code> : this option is used to specify that the current module is compiled with the interface files specifying the argument of the functor. This option should be used together with -for-pack <module>, where <module> is the name of the module in which the current module will be embedded.</p>
</li>
<li>
<p><code>-pack-functor <module></code> : this option is used to pack the modules. It should be used with the option -o <object_file> to specify in which module it should be embedded. The <module> specified with -pack-functor specifies the name of functor that will be created in the target object file.</p>
</li>
</ul>
<p>If the interface <code>x.mli</code> contains:</p>
<pre><code class="language-ocaml">type t
val compare : t -> t -> int
</code></pre>
<p>and the files <code>xset.ml</code> and <code>xmap.ml</code> contain respectively:</p>
<pre><code class="language-ocaml">module T = Set.Make(X)
</code></pre>
<pre><code class="language-ocaml">module T = Map.Make(X)
</code></pre>
<p>Then:</p>
<pre><code class="language-shell-session">~/test% ocamlopt -c -for-pack Xx -functor x.cmi xset.ml
~/test% ocamlopt -c -for-pack Xx -functor x.cmi xmap.ml
~/test% ocamlopt -pack-functor MakeSetAndMap -o xx.cmx xset.cmx xmap.cmx
</code></pre>
<p>will construct a compiled unit whose signature is the following (you can obtain it
with <code>ocamlopt -i xx.cmi</code>, see below):</p>
<pre><code class="language-ocaml">module MakeSetAndMap :
  functor (X : sig type t val compare : t -> t -> int end) ->
    sig
      module Xset : sig
        module T : sig
          type elt = X.t
          type t = Set.Make(X).t
          val empty : t
          val is_empty : t -> bool
          ...
        end
      end
      module Xmap : sig
        module T : sig
          type key = X.t
          type 'a t = 'a Map.Make(X).t
          val empty : 'a t
          val is_empty : 'a t -> bool
          ...
        end
      end
    end
</code></pre>
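<p>As an illustration, a client can then apply the packed functor like any ordinary functor. The following is a minimal, hypothetical sketch (the module name <code>Xx</code> comes from the <code>-for-pack Xx</code> example above, and <code>Int_ord</code> is made up for the example):</p>
<pre><code class="language-ocaml">(* use.ml: a hypothetical client of the packed functor Xx.MakeSetAndMap *)
module Int_ord = struct
  type t = int
  let compare = compare
end

(* Apply the functor produced by -pack-functor MakeSetAndMap *)
module M = Xx.MakeSetAndMap(Int_ord)

let () =
  let s = M.Xset.T.add 1 M.Xset.T.empty in
  let m = M.Xmap.T.add 1 "one" M.Xmap.T.empty in
  Printf.printf "%b %s\n" (M.Xset.T.mem 1 s) (M.Xmap.T.find 1 m)
</code></pre>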
<h3>Other extension: printing interfaces</h3>
<p>OCaml only lets you print the interface of a module or interface
by compiling its source with the <code>-i</code> option. However, you don’t always
have the source of an object interface (in particular, if it was
generated by packing), and you might still want to print it.</p>
<p>In such a case, the lib-functor patch lets you do so, by using
the <code>-i</code> option directly on an interface object file:</p>
<pre><code class="language-shell-session">~/test% cat > a.mli
val x : int
~/test% ocamlc -c -i a.mli
val x : int
~/test% ocamlc -c -i a.cmi
val x : int
</code></pre>
<h3>Other extension: packing interfaces</h3>
<p>OCaml only allows you to pack object files inside another object file
(<code>.cmo</code> or <code>.cmx</code>). When doing so, you can either provide a source
interface (<code>.mli</code>), which you need to compile to obtain the corresponding
object interface (<code>.cmi</code>), or the object interface will be generated
automatically by exporting all the sub-modules within the packed module.</p>
<p>However, you sometimes want to specify the interface of each module
separately, so that:</p>
<ul>
<li>
<p>you can reuse most of the interfaces you already specified</p>
</li>
<li>
<p>you can use a different interface for a module than the one used to
compile the other modules. This happens when you want to export more
values to the other internal sub-modules than you want to export to
the user.</p>
</li>
</ul>
<p>In such a case, the lib-functor patch lets you do so, by using
the <code>-pack</code> option on interface object files:</p>
<pre><code class="language-shell-session">test% cat > a.mli
val x : int
test% cat > b.mli
val y : string
test% ocamlc -c a.mli b.mli
test% ocamlc -pack -o c.cmi a.cmi b.cmi
test% ocamlc -i c.cmi
module A : sig val x : int end
module B : sig val y : string end
</code></pre>
<h2>Using <code>ocp-pack</code> to pack source files</h2>
<h3>Installation of ocp-pack</h3>
<p>Download the source file from:</p>
<p><code>ocp-pack-1.0.1.tar.gz</code> (20 kB, GPL Licence, Copyright OCamlPro SAS)</p>
<p>Then, you just need to compile it with:</p>
<pre><code class="language-shell-session">~% tar zxf ocp-pack-1.0.1.tar.gz
~% cd ocp-pack-1.0.1
~/ocp-pack-1.0.1% make
~/ocp-pack-1.0.1% make install
</code></pre>
<h3>Usage of <code>ocp-pack</code></h3>
<p><code>ocp-pack</code> can be used to pack the source files of several modules into a single
source file. It allows you to avoid the <code>-pack</code> option, which
is not supported by all OCaml tools (for example,
<code>ocamldoc</code>). Moreover, <code>ocp-pack</code> tries to provide the correct locations
to the compiler, so errors are reported within the original source
files rather than within the generated one.</p>
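<p>The usual way to preserve locations in OCaml is with line directives; as a purely hypothetical sketch (the module and file names are made up), a generated packed file would have this kind of shape:</p>
<pre><code class="language-ocaml">(* Hypothetical excerpt of a packed file: the line directive below makes
   the compiler report errors against src/util.ml instead of the pack. *)
module Util = struct
# 1 "src/util.ml"
let rec gcd a b = if b = 0 then a else gcd b (a mod b)
end
</code></pre>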
<p>It supports the following options:</p>
<pre><code class="language-shell-session">% ocp-pack -help
Usage:
ocp-pack -o target.ml [options] files.ml*
Options:
-o <filename.ml> generate filename filename.ml
-rec use recursive modules
all .ml files must have a corresponding .mli file
-pack-functor <modname> create functor with name <modname>
-functor <filename.mli> use filename as an argument for functor
-mli output the .mli file too
.ml files without .mli file will not export any value
-no-ml do not output the .ml file
-with-ns use directory structure to create a hierarchy of modules
-v increment verbosity
--version display version information
</code></pre>
<p><code>ocp-pack</code> automatically detects interface sources and implementation
sources. When only the interface source is available, the module is assumed
to be a type-only module, i.e. one containing no <code>val</code> items.</p>
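<p>For example, a hypothetical interface-only source <code>kinds.mli</code> such as the following would be packed as a type-only module:</p>
<pre><code class="language-ocaml">(* kinds.mli: a hypothetical interface-only module, with types but no val items *)
type color = Red | Green | Blue
type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree
</code></pre>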
<p>Here is an example of using <code>ocp-pack</code> to build the ocamlgraph package:</p>
<pre><code class="language-shell-session">test% ocp-pack -o graph.ml
lib/bitv.ml lib/heap.ml lib/unionfind.ml
src/sig.mli src/dot_ast.mli src/sig_pack.mli
src/version.ml src/util.ml src/blocks.ml
src/persistent.ml src/imperative.ml src/delaunay.ml
src/builder.ml src/classic.ml src/rand.ml src/oper.ml
src/path.ml src/traverse.ml src/coloring.ml src/topological.ml
src/components.ml src/kruskal.ml src/flow.ml src/graphviz.ml
src/gml.ml src/dot_parser.ml src/dot_lexer.ml src/dot.ml
src/pack.ml src/gmap.ml src/minsep.ml src/cliquetree.ml
src/mcs_m.ml src/md.ml src/strat.ml
test% ocamlc -c graph.ml
test% ocamlopt -c graph.ml
</code></pre>
<p>The <code>-with-ns</code> option can be used to automatically build a hierarchy of
modules. With that option, sub-directories are seen as
sub-modules. For example, packing <code>a/x.ml</code>, <code>a/y.ml</code> and <code>b/z.ml</code> will give
a result like:</p>
<pre><code class="language-ocaml">module A = struct
  module X = struct ... end
  module Y = struct ... end
end
module B = struct
  module Z = struct ... end
end
</code></pre>
<h3>Packing modules as functors</h3>
<p>The <code>-pack-functor</code> and <code>-functor</code> options provide the same behavior
as the corresponding options of the lib-functor patch. The only difference is
that <code>-functor</code> takes the interface source as argument, not the
interface object.</p>
<h3>Packing recursive modules</h3>
<p>When trying to pack modules with <code>ocp-pack</code>, you might discover that
your toplevel modules have recursive dependencies. This usually comes
from types that are declared abstract in the interfaces but depend
on each other in the implementations. Such modules cannot simply be
packed by <code>ocp-pack</code>.</p>
<p>To handle them, <code>ocp-pack</code> provides a <code>-rec</code> option. With that option,
modules are put within a <code>module rec</code> construct, and each of them is required to
be accompanied by an interface source file.</p>
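<p>As a rough, hypothetical sketch (two made-up modules <code>A</code> and <code>B</code> whose abstract types refer to each other), the generated source has the following shape:</p>
<pre><code class="language-ocaml">(* Hypothetical shape of the output of ocp-pack -rec for a.ml/a.mli and b.ml/b.mli *)
module rec A : sig
  type t
  val of_b : B.t -> t
end = struct
  type t = Leaf | Of_b of B.t
  let of_b b = Of_b b
end

and B : sig
  type t
  val of_a : A.t -> t
end = struct
  type t = Empty | Of_a of A.t
  let of_a a = Of_a a
end
</code></pre>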
<p>Moreover, in many cases, OCaml is not able to compile such recursive modules:</p>
<ul>
<li>
<p>For typing reasons: recursive modules are typed in an environment
containing only an approximation of the other recursive modules’
signatures.</p>
</li>
<li>
<p>For code generation reasons: recursive modules can be reordered
depending on their shape, and this reordering can generate an order
that is actually not safe, leading to an exception at runtime</p>
</li>
</ul>
<p>To solve these two issues in most cases, you can use the following
patch (you can apply it using the same recipe as for lib-functor, and
even apply both patches on the same sources):</p>
<ul>
<li><code>ocaml+rec-3.12.1.patch.gz</code>
</li>
</ul>
<p>With this patch, recursive modules are typed in an environment that is
progressively enriched with the final types of the modules as soon as
they become available. Also, during code generation, a topological
order is computed on the recursive modules, and the subset of modules
that can be initialized in that topological order is generated
immediately, leaving only the other modules to be reordered.</p>
OCaml and Windowshttps://ocamlpro.com/blog/2011_06_23_ocaml_and_windows2011-06-23T08:12:13Z2011-06-23T08:12:13Z
Fabrice Le Fessant
Recently, I have been experimenting with OCaml / MSVC running on Windows 7 64bit. I have mainly followed what OCaml’s README.win32 was saying and I learned some NSIS tricks. The result of this experiment is the following two (rather big) Windows binaries: ocaml-trunk-64-installer.exe (92 MB)
...<p>Recently, I have been experimenting with OCaml / MSVC running on Windows 7 64bit. I have mainly followed what <a href="http://caml.inria.fr/pub/distrib/ocaml-3.12/notes/README.win32">OCaml’s README.win32</a> was saying and I learned some NSIS tricks. The result of this experiment is the following two (rather big) Windows binaries:</p>
<ul>
<li><a href="http://ocamlpro.com//files/ocaml-trunk-64-installer.exe">ocaml-trunk-64-installer.exe</a> (92 MB)
</li>
<li><a href="http://ocamlpro.com//files/ocaml-3.12-64-installer.exe">ocaml-3.12-64-installer.exe</a> (92 MB)
</li>
</ul>
<p>These binaries are auto-installers for:</p>
<ul>
<li>the OCaml distribution (either the 3.12.1+rc1 version or trunk);
</li>
<li>Emacs (version 23.3) + tuareg mode (version 2.0.4);
</li>
<li>OCamlGraph (version 1.7): this is just a little experiment with packaging external libraries.
</li>
</ul>
<p>Hopefully, all of this might be useful to some people, at least to those looking for an alternative to WinOcaml, which seems to be broken. You should need no other dependencies if you just want to use the OCaml top-level (<code>ocaml.exe</code>). If you want to compile your project, you will need MSVC installed and correctly set up. If your project uses Makefiles, then you should probably install Cygwin as well. I can give more details if people are interested.</p>
<p>Unfortunately, the current process for creating these binaries involves an awful lot of manual steps (including switching from the Windows terminal to the Cygwin shell), and furthermore, many OCaml packages won’t install directly on Windows (as most of them use shell tricks to be configured correctly). I hope we will be able to release something cleaner at a later stage.</p>
OCaml Cheat Sheetshttps://ocamlpro.com/blog/2011_06_03_ocaml_cheat_sheets2011-06-03T08:12:13Z2011-06-03T08:12:13Z
Fabrice Le Fessant
When you are beginning in a new programming language, it is sometimes helpful to have an overview of the documentation that you can pin on your wall and easily have a look at while you are programming. Since we couldn’t find such Cheat Sheets, we decided to start writing our own cheat sheets ...<p>When you are beginning in a new programming language, it is sometimes helpful to have an overview of the documentation that you can pin on your wall and easily have a look at while you are programming. Since we couldn’t find such Cheat Sheets, we decided to start writing our own cheat sheets for OCaml.</p>
<p>Beware, these documents are drafts that we plan to improve in the coming months. In the meantime, feel free to tell us how we could improve them, what is missing, and where the focus should be!</p>
<ul>
<li><a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-lang.pdf">The OCaml Language</a> (June 8, 2011)
</li>
<li><a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-tools.pdf">OCaml Standard Tools</a> (June 7, 2011)
</li>
<li><a href="https://ocamlpro.github.io/ocaml-cheat-sheets/ocaml-stdlib.pdf">OCaml Standard Library</a> (June 7, 2011)
</li>
<li><a href="https://ocamlpro.github.io/ocaml-cheat-sheets/tuareg-mode.pdf">OCaml Emacs Mode (Tuareg)</a> (June 27, 2011)
</li>
</ul>
OCaml 32bits longvalhttps://ocamlpro.com/blog/2011_05_06_ocaml_32bits_longval2011-05-06T08:12:13Z2011-05-06T08:12:13Z
Fabrice Le Fessant
You will need OCaml 3.11.2 installed on an i686 Linux computer. The archive contains: libcamlrun-linux-i686.a
ocamlrun-linux-i686
Makefile
README The Makefile has two targets: sudo make install will save /usr/bin/ocamlrun and /usr/lib/ocaml/libcamlrun.a in the current directory and replace them with ...<p>You will need OCaml 3.11.2 installed on an i686 Linux computer. The archive contains:</p>
<ul>
<li>libcamlrun-linux-i686.a
</li>
<li>ocamlrun-linux-i686
</li>
<li>Makefile
</li>
<li>README
</li>
</ul>
<p>The Makefile has two targets:</p>
<ul>
<li><code>sudo make install</code> will save <code>/usr/bin/ocamlrun</code> and <code>/usr/lib/ocaml/libcamlrun.a</code> in the current directory and replace them with the longval binaries.
</li>
<li><code>sudo make restore</code> will restore the saved files.
</li>
</ul>
<p>If your install directories are not the default ones, you should modify the Makefile. After installing, you can test it with the standard OCaml top-level:</p>
<p><code>Objective Caml version 3.11.2</code></p>
<pre><code class="language-Ocaml">
# let s = ref “”;;
val s : string ref = {contents = “”}
# s := String.create 20_000_000;;
– : unit = ()
</code></pre>
<p>Now you can enjoy big values in all your strings and arrays in
bytecode. You will need to relink all your custom binaries. If you are
interested in the native version of the longval compiler, you can
<a href="mailto:contact@ocamlpro.com">contact</a> us.</p>