> > to decouple the storage schemes from the algorithms so that you can write
> > O(M + N) code to support O(M*N) template instances. And there would be no
> > point if this abstraction had a severe penalty on the runtime performance.
> > BGL knows exactly which algorithm to use for a particular storage mode
> > because intelligence is built into it using traits. The hierarchical
> > design (which is
> this is a misconception about bgl and c++ generic programming in general
>
> you can treat certain aspects of the algorithms but not everything
> genericity is only available where the designers thought about it
> (eg you cannot reuse a stack in dfs graph traversals of various graphs
> because it is hardcoded in boost)
>
> this usually ends up in a lot of configurability where users don't
> need it, but the result is still not flexible enough for practical
> problems
>
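Your stack example is a good way to put it: in a hand-rolled traversal the
stack can simply be a parameter that the caller owns and can reuse. A rough
sketch of the idea (my own code, not how BGL parameterises its traversal):

#include <cstddef>
#include <map>
#include <set>
#include <vector>

// A hand-rolled DFS in which the stack is an ordinary parameter, so the
// caller chooses its type and can reuse one stack across many traversals.
typedef std::map<char, std::vector<char> > Graph;

template <typename Stack>
void dfs(const Graph& g, char source, Stack& stack, std::vector<char>& order)
{
    std::set<char> seen;
    stack.push_back(source);
    while (!stack.empty()) {
        char u = stack.back();
        stack.pop_back();
        if (!seen.insert(u).second)
            continue;                       // already visited
        order.push_back(u);
        Graph::const_iterator it = g.find(u);
        if (it == g.end())
            continue;                       // no outgoing edges
        for (std::size_t i = 0; i < it->second.size(); ++i)
            stack.push_back(it->second[i]);
    }
}

// Usage: std::vector<char> stack, order; dfs(g, 'A', stack, order);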
I really haven't used BGL in any serious way, like in a project, so maybe
what you're saying is true.
BGL's flexibility might become a problem in itself: it requires knowing so
much that it becomes a library by experts, for experts. This is a rather
unfortunate outcome. I am far from a language expert, but I think such
problems arise because C++ syntax does not provide enough flexibility for
expressing abstractions without writing extensive amounts of code just to
make clear what you're trying to do.
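To illustrate, even a tiny generic function needs a fair amount of ceremony
just to name the types it works on. A toy example of my own (nothing to do
with BGL):

#include <iterator>
#include <vector>

// Even a small generic function needs trait machinery just to name
// "the element type of whatever sequence I was handed":
template <typename Iter>
typename std::iterator_traits<Iter>::value_type
sum(Iter first, Iter last)
{
    typedef typename std::iterator_traits<Iter>::value_type value_type;
    value_type total = value_type();
    for (; first != last; ++first)
        total += *first;
    return total;
}

// Usage: std::vector<int> v(3, 2); sum(v.begin(), v.end()) == 6.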
> that's why everyone ends up implementing their own containers instead
> of using stl in a bigger c++ project
> (eg by adding concurrency support or using different tradeoffs in the
> algorithm design, so even though stl is very generic it is still not
> flexible enough)
>
> intuition tells that generic code allows easy replacement of components
> but this is a fallacy:
> - the designer must know how the code will be used in advance
> and decompose the algorithms that way
> - the users need to know a lot more about the internal implementation,
> how an abstract component interacts with others
> (the more fine grained the generic code the more difficult to use it)
> (it's not true that you can easily write your own graph representation
> to use with boost; you need to know a lot of details)
> (from the docs a lot of semantic detail is not clear, eg can you modify
> a graph from a visitor? boost uses iterators and in many data structures
> modifications invalidate them)
> - when an algorithm is used in the most simple and usual way, genericity
> causes a lot of unnecessary boilerplate, confusion about the api, and
> cryptic compiler error messages
> (to find the shortest path in a simple directed graph with int edge
> weights, i still have to fill the template arguments
>
> http://www.boost.org/doc/libs/1_43_0/libs/graph/example/dijkstra-example.cpp
> this is not elegant code)
>
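The iterator point in particular is easy to show without any graph at all,
just a plain std::vector (my own two-line illustration):

#include <vector>

int main()
{
    std::vector<int> v(4, 0);
    std::vector<int>::iterator it = v.begin();
    // Growing the vector may reallocate its storage, after which 'it'
    // dangles; dereferencing it is undefined behaviour. A visitor that
    // mutates the graph mid-traversal risks the same class of problem.
    v.push_back(5);
    // *it;   // undefined behaviour: 'it' was invalidated by push_back
    return 0;
}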
You have a point here. During my graph theory course I wrote Python code to
implement algorithms on graphs, and it turned out remarkably clear and
concise. I recall that being able to use dictionaries for the adjacency
lists and the lack of static typing requirements led to very readable code.
Here's one that computes all paths between two vertices in a directed
graph:
#!/usr/bin/env python
# Adjacency lists as a plain dictionary: vertex -> list of successors.
graph = {'A': ['B', 'C'],
         'B': ['C', 'D'],
         'C': ['D'],
         'D': ['C'],
         'E': ['F'],
         'F': ['C']}

def find_all_paths(G, start, end, visited=[]):
    # Generate every simple path from start to end, one list at a time.
    for i in G[start]:
        if i not in visited:
            if i == end:
                yield [start] + [i]
            else:
                for j in find_all_paths(G, i, end, visited + [start]):
                    yield [start] + j

start, end = 'A', 'D'
for i in find_all_paths(graph, start, end):
    print i
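For comparison, here is roughly what the shortest-path setup you linked
boils down to, as far as I can tell from the BGL documentation (sketched
from memory, so details may well be off; the linked dijkstra-example.cpp is
the authoritative version):

#include <boost/graph/graph_traits.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <iostream>
#include <vector>

int main()
{
    using namespace boost;

    // The storage scheme and the weight type are all template arguments.
    typedef adjacency_list<listS, vecS, directedS,
                           no_property,
                           property<edge_weight_t, int> > graph_t;
    typedef graph_traits<graph_t>::vertex_descriptor vertex_t;

    enum { A, B, C, D, N };
    graph_t g(N);
    add_edge(A, B, 1, g);
    add_edge(B, C, 2, g);
    add_edge(A, C, 4, g);
    add_edge(C, D, 1, g);

    // Results come back through property maps passed as named parameters.
    std::vector<vertex_t> pred(num_vertices(g));
    std::vector<int> dist(num_vertices(g));
    dijkstra_shortest_paths(g, vertex(A, g),
                            predecessor_map(&pred[0]).distance_map(&dist[0]));

    std::cout << "distance A -> D: " << dist[D] << std::endl;
    return 0;
}

Most of that is ceremony to pin down the storage scheme and wire up the
property maps before the algorithm even runs.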
> i experimented with various types of genericity for algorithms
> here is one approach:
> implement the algorithm in the simplest way for the most useful types
> etc., then when you need it for a specific task, copy the code and
> apply appropriate modifications in the source (often s/type1/type2/g)
>
> it turns out that with this approach it's often easier to change
> a component without screwing things up than by understanding all
> the gory details of a generic api like boost
>
> adapting an algorithm for your usecase usually takes less than an hour
> and the result is readable customized code, easy to debug, easy to edit
>
I had written some C code for a programming assignment and my experience
parallels what you describe here. It had no dependencies on external
libraries and everything was done from scratch (OK, that wasn't a choice I
had -- assignment, remember :)). It turned out to be fairly simple to
implement, to use and, most importantly, to maintain. I can go back to the
code today and still understand most things quickly.
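To make the contrast concrete, here is the kind of self-contained,
single-purpose code that the copy-and-customize approach produces -- a
sketch of mine for one fixed representation, not the assignment code:

#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra specialised to one concrete representation: vertices are ints,
// the graph is a vector of (neighbour, weight) lists, weights are ints.
// Adapting it to another representation means editing ~20 lines rather
// than learning a generic interface.
typedef std::vector<std::vector<std::pair<int, int> > > Graph;

std::vector<int> shortest_distances(const Graph& g, int source)
{
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(g.size(), INF);
    // min-heap of (distance so far, vertex)
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int> >,
                        std::greater<std::pair<int, int> > > pq;
    dist[source] = 0;
    pq.push(std::make_pair(0, source));
    while (!pq.empty()) {
        int d = pq.top().first;
        int u = pq.top().second;
        pq.pop();
        if (d > dist[u])
            continue;                       // stale queue entry
        for (std::size_t i = 0; i < g[u].size(); ++i) {
            int v = g[u][i].first;
            int w = g[u][i].second;
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push(std::make_pair(dist[v], v));
            }
        }
    }
    return dist;
}

Calling it is just a matter of filling the vector of adjacency lists and
reading off dist[target].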
> > Template metaprogramming uses static polymorphism which is an entirely
> > different beast from OO inheritance that uses dynamic polymorphism.
> yes, each of them has quirks and the user of the library should deal with
> them
> (when using various concepts together, the complexity of the result is
> the product of the complexity of each part, not the sum)
>
> my argument is that this amount of abstraction is just not worth the effort
> (at least in c++)
>
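For anyone following along, a minimal side-by-side of the two kinds of
polymorphism (a toy of my own, not library code):

#include <iostream>

// Dynamic polymorphism: one function compiled once, the call resolved at
// run time through a vtable, and every type must share a base class.
struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() {}
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const { return side * side; }
};

double twice_area(const Shape& s) { return 2 * s.area(); }

// Static polymorphism: a separate function is stamped out per type at
// compile time; no base class needed, but errors and code bloat appear
// at instantiation.
template <typename S>
double twice_area_t(const S& s) { return 2 * s.area(); }

int main()
{
    Square sq(3.0);
    std::cout << twice_area(sq) << " " << twice_area_t(sq) << std::endl;
    return 0;
}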
You have put forward so many good arguments that I am once again split on
which language to use primarily! I used to code a lot in C, but then I saw
some neat stuff in template metaprogramming. Now, like you say, it seems
like more than I asked for. Keeping in mind what you said, I will
critically evaluate the language of choice for my new programming projects
and go for C++ only if there are strong reasons to do so. After all, pain
and effort have to be compensated.
After all this I just feel that somewhere between C and C++ there's a great
language! :)
Chillu