We are exploring efficient techniques for reasoning under uncertainty in first-order domains. Given a knowledge base that contains sentences of first-order logic, each of which may be adorned with a degree of belief, i.e., a probability of truth, we would like to compute the degree of belief of a new sentence. Our technique involves transforming such a knowledge base into a probability distribution over possible worlds, which defines the degree of belief of every sentence of the language. Our current work involves constructing maximum entropy (or log-linear) distributions. We find that this approach has appealing inferential properties and a compact representation. In particular, maximum entropy distributions can be represented by graphical models whose structure corresponds to the structure of the original knowledge base, and conditional independences in such models can be leveraged to obtain efficient inference algorithms. Importantly, we have found that the complexity of probabilistic reasoning under these maximum entropy distributions is no harder than the corresponding deterministic inference problems.
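The core idea above can be illustrated with a small sketch. The toy knowledge base, atoms, formulas, and weights below are all hypothetical: two ground atoms stand in for the groundings of a first-order knowledge base, each formula carries a weight (the parameter of the log-linear form), and the distribution over possible worlds is computed by brute-force enumeration rather than the efficient graphical-model inference the text describes.

```python
import itertools
import math

# Hypothetical toy knowledge base over two ground atoms.
# Each weighted formula is a (weight, truth-function) pair; the weights
# are made-up illustrative numbers, not learned parameters.
atoms = ["rain", "wet"]
formulas = [
    (1.5, lambda w: (not w["rain"]) or w["wet"]),  # rain => wet
    (0.5, lambda w: w["rain"]),                    # rain
]

def worlds():
    # Enumerate every possible world (truth assignment to the atoms).
    for values in itertools.product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def weight(world):
    # Log-linear (maximum entropy) form:
    # an unnormalized weight exp(sum of weights of satisfied formulas).
    return math.exp(sum(wt for wt, f in formulas if f(world)))

# Partition function: normalizes the weights into a distribution.
Z = sum(weight(w) for w in worlds())

def belief(sentence):
    # Degree of belief of a sentence = total probability mass of the
    # possible worlds in which the sentence is true.
    return sum(weight(w) for w in worlds() if sentence(w)) / Z

print(belief(lambda w: w["wet"]))
```

In a sketch like this, querying the degree of belief of any sentence reduces to summing the probabilities of the worlds that satisfy it; the brute-force enumeration is exponential in the number of atoms, which is exactly the cost the graphical-model structure mentioned above is meant to avoid.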